| idx | question | answer |
|---|---|---|
47,901
|
Standardize binary variable to create interaction term in regression?
|
It makes little sense to standardize dummy variables. A dummy cannot be increased by a standard deviation, so the usual interpretation of standardized coefficients does not apply. Moreover, the standard interpretation of the dummy variable, as the difference in the average level of Y between the two categories, is lost.
Your interaction result could be interpreted as follows:
Among those who are female (sex dummy: 1 = female, 0 = male), a one-standard-deviation increase in age (standardized age, mean = 0, sd = 1) has a positive/negative (significant/insignificant) effect of (the exact value of the coefficient of the interaction term) on your dependent variable (Y).
The links below might help:
page 5 of this link
http://polisci.msu.edu/jacoby/icpsr/regress3/lectures/week2/8.RelImport.pdf
page 9 of this link
https://stat.ethz.ch/~maathuis/teaching/stat423/handouts/Chapter7.pdf
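To make this interpretation concrete, here is a minimal R sketch on simulated data (all variable names and values are illustrative): standardize only the continuous predictor and leave the dummy as 0/1.

```r
# Simulated example: dummy sex stays 0/1, only age is standardized.
set.seed(1)
n   <- 200
sex <- rbinom(n, 1, 0.5)                  # 1 = female, 0 = male
age <- rnorm(n, 40, 10)
y   <- 1 + 0.5*sex + 0.03*age + 0.02*sex*age + rnorm(n)
age_z <- as.numeric(scale(age))           # mean 0, sd 1
fit <- lm(y ~ sex * age_z)
summary(fit)$coefficients
# The sex:age_z coefficient is the additional effect, among females,
# of a one-standard-deviation increase in age on y.
```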
|
47,902
|
Standardize binary variable to create interaction term in regression?
|
The "mean centring" procedure for categorical variables is different from the one you would use for a continuous variable (in R: scale(x, center=TRUE, scale=FALSE)).
You can mean-center categorical variables by using an effect-coding strategy instead of traditional dummy coding (0/1). The choice of coding scheme does not really matter (the schemes are linearly equivalent, so you can recover the results of one from the results of the other). However, it makes a big difference in how the results are read and interpreted.
In your case I don't think it really makes sense to "mean center" (effect-code) the gender variable, because (1) this variable has only 2 levels and (2) you interact it with a continuous variable (i.e., age). Say the reference category for gender is "male"; in your model (y ~ gender + age + gender*age), "age" will represent the effect of age when gender = male, and "gender*age" the marginal effect of moving from male to female.
If you decide to "mean center" (effect-code) gender, then "age" will correspond to the effect of age when gender is set to its mean (so basically something in between male and female). If you compare (y ~ gender + age) and (y ~ gender + age + gender*age), you will find little difference in the estimates for "age" between the two models, which is not the case if you don't mean-center "gender".
Personally, I have found mean centring (effect coding) of categorical variables useful when I want to estimate interaction effects between 2 (or more) of them; then I don't have to keep track of the reference situation (e.g., male, young, etc.). Hope this helps!
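A minimal R sketch of the two coding schemes (illustrative data; note that R orders factor levels alphabetically, so "female" comes first here, and contr.sum centers the column exactly only when the groups are balanced):

```r
# Dummy (0/1) vs. effect (-1/1) coding of a two-level factor.
gender <- factor(c("male", "female", "female", "male"))
model.matrix(~ gender)                                  # dummy coding: 0/1 column
model.matrix(~ gender,                                  # effect coding: -1/1 column
             contrasts.arg = list(gender = contr.sum))
```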
|
47,903
|
Standardize binary variable to create interaction term in regression?
|
In terms of specifics on how to set up the interactions, this web page seems to be helpful: http://www.restore.ac.uk/srme/www/fac/soc/wie/research-new/srme/modules/mod3/11/
You don't need to standardize the variables (unless they are already standardized).
I'd set it up like this (limiting this list to a small number of ages):
age  sex  age*sex
 10    1       10
 20    1       20
 30    1       30
 10    0        0
 20    0        0
 30    0        0
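The same setup can be built in R (values as in the example above; the product column needs no standardization):

```r
# Build the design columns, including the product term age*sex.
d <- data.frame(age = c(10, 20, 30, 10, 20, 30),
                sex = c(1, 1, 1, 0, 0, 0))
d$age_sex <- d$age * d$sex
d
```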
|
47,904
|
Standardize binary variable to create interaction term in regression?
|
By interaction term do you mean a higher order term?
So instead of:
fit <- lm(y ~ x1 + x2, data = mydata)
You can do:
fit <- lm(y ~ x1 * x2, data = mydata)
(In R's formula syntax, x1 * x2 expands to x1 + x2 + x1:x2, so both main effects and the interaction are included.)
(Looking back at this answer, it's probably not what you are looking for. I would suggest trying to set your sex variable to -1 or 1; this way at least the mean of the product term between sex and age is zero.)
|
47,905
|
Linear Properties of the Quantile Function
|
Hint: the probability that $X<Q_x(p)$ is $p$. Given that $Y$ is linearly related to $X$, can you now write a similar expression for $Y$?
|
47,906
|
How to perform a sensitivity analysis in Bayesian statistics?
|
A fairly standard approach to showing that your results were not heavily influenced by your choice of prior is simply to show that they hold when a different prior is chosen. For example, if you have an informative prior that suggests a certain result is more likely, you might also want to show that your results hold when a uniform prior is specified.
A fairly new piece of software for checking such things is JASP, which is like a free, modern SPSS that handles Bayesian versions of many frequentist statistical tests. What is nice about it is that when you run a Bayesian test, it outputs a graph showing what your result would have been had a range of other priors been specified. I don't know if this output is something you would want to include in a report, but it is useful for getting an idea of how sensitive your results are to your specific prior.
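As a concrete sketch of the refit-under-several-priors approach, here is a minimal R example using a conjugate Beta-Binomial model (the data, prior names, and hyperparameters are all illustrative):

```r
# Prior sensitivity sketch: same data, three different Beta priors.
y <- 27; n <- 50                       # 27 successes in 50 trials
priors <- list(uniform    = c(1, 1),
               sceptical  = c(2, 8),
               optimistic = c(8, 2))
res <- sapply(priors, function(ab) {
  a <- ab[1] + y                       # conjugate update: the posterior is
  b <- ab[2] + n - y                   # Beta(a + y, b + n - y)
  c(post_mean = a / (a + b),
    lo95 = qbeta(0.025, a, b),
    hi95 = qbeta(0.975, a, b))
})
round(res, 3)
# If the posterior summaries barely move across priors, the conclusion
# is robust to the prior choice.
```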
|
47,907
|
MLE for k-truncated poisson
|
Update: Now that the question is more specific, I've added the log likelihood and R code for left-truncation.
Right-truncation. For $Y_i$ distributed as a truncated Poisson with parameters $\lambda$ and truncation parameter $k$, the log likelihood of $n$ random samples is given by
$$\log(\lambda) \sum_{i=1}^n y_i -n \lambda -n \log \left({\sum_{j=0}^k {{e^{-\lambda}\lambda^j}\over{j!}}}\right) $$
This is maximized for any value of $\lambda$ when $\hat{k}=\max(y_1, y_2,\ldots, y_n)$ (which is the smallest value that $k$ can take). The maximum likelihood estimate of $\lambda$ requires an iterative procedure. Moore (1952, Biometrika) provides a good and easily calculated initial estimate of $\hat{\lambda}$:
$$\hat{\lambda}_0 ={{\sum_{i=1}^n y_i}\over{\sum_{i=1}^n I(y_i<\hat{k})}}$$
where $I(\cdot)$ is the indicator function taking on 1 if $y_i<\hat{k}$ and 0 otherwise. (So it's a mean adjusted upwards a bit.)
Some not so elegant R code follows:
# MLE estimation of a truncated Poisson with unknown truncation level.
# Objective is to find estimates of lambda (underlying Poisson mean) and
# k (unknown truncation value).

# Generate some samples from a known truncated Poisson distribution
lambda <- 10   # Poisson mean
k      <- 8    # Truncation level
n      <- 100  # Sample size
y <- rpois(n*4, lambda)  # Oversample to be more certain of getting n samples
# Keep just the first n legitimate observations
y <- y[y <= k][1:n]

# MLE for k and initial estimate for lambda
# From Moore (1952), Biometrika,
# "The estimation of the Poisson parameter from a truncated distribution"
khat <- max(y)
lambda0 <- sum(y)/length(y[y < khat])

# Define log of the likelihood function
logL <- function(lambda, ysum, n, k) {
  log(lambda)*ysum - n*lambda - n*ppois(k, lambda, log.p=TRUE)
}

# Find maximum likelihood estimate of lambda
trPoisson <- optim(lambda0, logL, ysum=sum(y), n=length(y), k=khat,
                   method="BFGS", control=list(fnscale=-1))

# Show results
cat("MLE of truncation value =", khat, "\n")
# MLE of truncation value = 8
cat("MLE of lambda =", trPoisson$par, "\n")
# MLE of lambda = 9.765054
Left-truncation. For left-truncation the log likelihood is the following:
$$\log(\lambda) \sum_{i=1}^n y_i -n \lambda -n \log \left(1-{\sum_{j=0}^{k-1} {{e^{-\lambda}\lambda^j}\over{j!}}}\right) $$
with the sum on the far right being zero (an empty sum) when $k \le 0$. Below is some R code:
# MLE estimation of a left-truncated Poisson with unknown truncation level.
# Objective is to find estimates of lambda (underlying Poisson mean) and
# k (unknown truncation value).

# Generate some samples from a known truncated Poisson distribution
lambda <- 10   # Poisson mean
k      <- 7    # Truncation level
n      <- 100  # Sample size
y <- rpois(n*30, lambda)  # Oversample to be more certain of getting n samples
y <- y[y >= k][1:n]

# MLE for k and initial estimate for lambda
khat <- min(y)
lambda0 <- mean(y)  # Probably any starting value greater than zero will work

# Define log of the likelihood function
logL <- function(lambda, ysum, n, k) {
  log(lambda)*ysum - n*lambda - n*log(1 - ppois(k-1, lambda))
}

# Find maximum likelihood estimate of lambda
trPoisson <- optim(lambda0, logL, ysum=sum(y), n=length(y), k=khat,
                   method="L-BFGS-B", lower=0, upper=Inf, control=list(fnscale=-1))

# Show results
cat("MLE of truncation value =", khat, "\n")
cat("MLE of lambda =", trPoisson$par, "\n")
|
47,908
|
Logistic regression intercept representing baseline probability
|
The intercept might be interpreted as the estimated baseline log odds when all independent variables are set to 0 (or to the reference category, in the case of categorical variables). The probability when all independent variables are set to 0 is exp(intercept)/(1+exp(intercept)).
With a standardized continuous variable, the intercept is the estimated log odds for the event when the standardized variable is 0.
The problem is that the mean probability in your sample is not the same as the probability when the standardized variable is 0. Suppose the probability of having an event (or whatever the dependent variable is) is 0.1 when the standardized variable x is 0, and the estimated coefficient for x is 1. For an individual whose value of x is 1, the odds ratio is exp(1) = 2.71, and we can calculate the expected probability of an event for such an individual:
base odds = 0.1/(1-0.1) = 0.11
odds for this individual: 0.11 * 2.71 = 0.30
probability for this individual = 0.30/(1+0.30) = 0.23
Now, for an individual who is one standard deviation below the mean on x, the odds ratio is exp(-1) = 0.37:
odds for this individual: 0.11 * 0.37 = 0.04
probability for this individual = 0.04/(1+0.04) = 0.04
So +1 sd on x means a probability of 0.23 and -1 sd means a probability of 0.04. If we instead calculate the probability for +2 sd we get 0.45, and for -2 sd we get 0.01.
It's easy to see that the average probability in the sample will be higher than the probability for individuals whose value on x is 0, because the mapping from log odds to probabilities is nonlinear, which skews the probabilities.
As for your question, I don't think it's possible to make the intercept represent the mean probability, because in logistic regression, (log) odds and odds ratios are estimated, not probabilities, and the mean probability is not really meaningful to consider in a logistic regression.
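A minimal R sketch of the point above, on simulated data (the coefficient values are illustrative): the intercept converts to the baseline probability via the inverse logit, and the sample mean probability comes out higher.

```r
# Simulated example: baseline probability vs. sample mean probability.
set.seed(2)
n <- 5000
x <- rnorm(n)                      # already standardized
p <- plogis(-2.197 + 1 * x)        # plogis(-2.197) is about 0.1 at x = 0
yv <- rbinom(n, 1, p)
fit <- glm(yv ~ x, family = binomial)
plogis(coef(fit)[1])               # estimated probability at x = 0, near 0.1
mean(yv)                           # sample mean probability, noticeably higher
```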
|
47,909
|
Logistic regression intercept representing baseline probability
|
There is no simple interpretation in binary logistic models, other than that the intercept and slopes satisfy the property that the average predicted probability equals the observed prevalence of $Y=1$ in the dataset used to fit the model. But I don't find it very useful to think about this in either linear models or logistic models, because the idea of reference values is arbitrary. For example, one person may think of the median or mode as the reference and another the mean. When categorical variables are included, things are more complex.
I like to think of the intercept as an arbitrary constant that makes the model work no matter what the numeric origin is for the predictors. In R when you request predictions everything is handled automatically.
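The average-probability property is easy to verify in R on simulated data (coefficient values are illustrative):

```r
# For a logistic model with an intercept, the mean fitted probability
# equals the observed prevalence of y = 1 (a consequence of the score
# equations). Quick check:
set.seed(3)
x <- rnorm(500)
yv <- rbinom(500, 1, plogis(-1 + 0.8 * x))
fit <- glm(yv ~ x, family = binomial)
all.equal(mean(fitted(fit)), mean(yv))   # TRUE, up to numerical tolerance
```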
|
47,910
|
More efficient way to create Block Toeplitz matrix in R? [closed]
|
This question might be of interest to statisticians and data scientists because of the multidimensional array handling issues it presents.
Loops aren't the problem. Your goal likely is to perform the operation efficiently, regardless of how it might be done, and perhaps to do it in an obviously parallelizable way.
Exploit R's ability to manage multidimensional arrays. (It inherits this from FORTRAN, which even in its original design handled up to seven dimensions.) Move groups of data en masse into an array and then reshape it as needed. These native operations will be fast, even if some looping needs to be done.
Although it uses short high-level loops (each executing $n+1$ times), the following solution will create a "block Toeplitz" array with $n=999$ and $2\times 2$ blocks--an array with four million elements--within half a second and with little overhead storage. It is based on the observation that the Toeplitz pattern can be constructed from the string
$$U_n, U_{n-1}, \ldots, U_1, U_0, U_1^\prime, U_2^\prime, \ldots, U_n^\prime$$
by taking the last $n+1$ elements to form the first column of output, shifting those elements to the left to form the second column of output, and so on, until the last column of output is formed from the first $n+1$ elements of this string.
The code uses n to represent $n+1$ (because R indexing starts at 1 instead of 0). The string of matrices itself, as a $k\times k\times (2n-1)$ array, is stored in a temporary array strip.
At the end, the $k\times k\times n\times n$ array that is created by "blasting" parts of the strip down the columns of the output X is permuted to get its values stored in the right order (I was too lazy to work out the optimal storage configuration at the outset) and then recast as a $kn\times kn$ array (which involves no movement of data). These column-wise operations are each highly vectorized and capable of massive parallelism.
The initial value of U contains obvious patterns to check that the output is correct.
#
# Create n square matrices of dimension k by k.
#
U <- list(matrix(1:4, 2), matrix(5:8, 2), matrix(9:12, 2))
#U <- lapply(1:1000, function(i) matrix(-3:0 + 4*i, 2))
system.time({
  k <- min(unlist(lapply(U, dim)))
  n <- length(U)
  #
  # Create the "strip".
  #
  strip <- array(NA, dim=c(k, k, 2*n-1))
  for (i in 1:n) strip[,,i] <- U[[n+1-i]]
  if (n > 1) for (i in 2:n) strip[,,n+i-1] <- t(U[[i]])
  #
  # Assemble into "block-Toeplitz" form.
  #
  X <- array(NA, dim=c(k, k, n, n))
  #
  # Blast the strip across X.
  #
  for (i in 1:n) X[,,,i] <- strip[,,(n+1-i):(2*n-i)]
  X <- matrix(aperm(X, c(1,3,2,4)), n*k)
})
|
More efficient way to create Block Toeplitz matrix in R? [closed]
|
This question might be of interest to statisticians and data scientists because of the multidimensional array handling issues it presents.
Loops aren't the problem. Your goal likely is to perform the
|
More efficient way to create Block Toeplitz matrix in R? [closed]
This question might be of interest to statisticians and data scientists because of the multidimensional array handling issues it presents.
Loops aren't the problem. Your goal likely is to perform the operation efficiently, regardless of how it might be done, and perhaps to do it in an obviously parallelizable way.
Exploit R's ability to manage multidimensional arrays. (It inherits this from FORTRAN, which even in its original design handled up to seven dimensions.) Move groups of data en masse into an array and then reshape it as needed. These native operations will be fast, even if some looping needs to be done.
Although it uses short high-level loops (each executing $n+1$ times), the following solution will create a "block Toeplitz" array with $n=999$ and $2\times 2$ blocks--an array with four million elements--within a half second and small use of overhead storage. It is based on the observation that the Toeplitz pattern can be constructed from the string
$$U_n, U_{n-1}, \ldots, U_1, U_0, U_1^\prime, U_2^\prime, \ldots, U_n^\prime$$
by taking the last $n+1$ elements to form the first column of output, shifting those elements to the left to form the second column of output, and so on, until the last column of output is formed from the first $n+1$ elements of this string.
The code uses n to represent $n+1$ (because R indexing starts at 1 instead of 0). The string of matrices itself, as a $k\times (2n-1)$ array, is stored in a temporary array strip.
At the end, the $k\times k\times n\times n$ array that is created by "blasting" parts of the strip down the columns of the output X is permuted to get its values stored in the right order (I was too lazy to work out the optimal storage configuration at the outset) and then recast as a $kn\times kn$ array (which involves no movement of data). These column-wise operations are each highly vectorized and capable of massive parallelism.
The initial value of U contains obvious patterns to check that the output is correct.
#
# Create n square matrices of dimension k by k.
#
U <- list(matrix(1:4, 2), matrix(5:8, 2), matrix(9:12, 2))
#U <- lapply(1:1000, function(i) matrix(-3:0 + 4*i, 2))
system.time({
k <- min(unlist(lapply(U, dim)))
n <- length(U)
#
# Create the "strip".
#
strip <- array(NA, dim=c(k,k,2*n-1))
for (i in 1:n) strip[,,i] <- U[[n+1-i]]
if (n > 1) for (i in 2:n) strip[,,n+i-1] <- t(U[[i]])
#
# Assemble into "block-Toeplitz" form.
#
X <- array(NA, dim=c(k,k,n,n))
#
# Blast the strip across X.
#
for (i in 1:n) X[,,,i] <- strip[,,(n+1-i):(2*n-i)]
X <- matrix(aperm(X, c(1,3,2,4)), n*k)
})
|
47,911
|
More efficient way to create Block Toeplitz matrix in R? [closed]
|
The following function takes as its argument a list of blocks. However, this may not be the best solution: there is still one loop present. And it needs some more work, since it doesn't transpose the blocks below the diagonal (in my case I have symmetric matrices).
toeplitz.block <- function(blocks) {
l <- length(blocks)
m.str <- toeplitz(1:l)
res <- lapply(1:l,function(k) {
res <- matrix(0,ncol=ncol(m.str),nrow=nrow(m.str))
res[m.str == k] <- 1
res %x% blocks[[k]]
})
Reduce("+",res)
}
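For concreteness, here is a usage sketch with two small hypothetical 2×2 blocks (the function definition is repeated so the snippet is self-contained; the block values are made up for illustration):

```r
toeplitz.block <- function(blocks) {
  l <- length(blocks)
  m.str <- toeplitz(1:l)
  res <- lapply(1:l, function(k) {
    res <- matrix(0, ncol = ncol(m.str), nrow = nrow(m.str))
    res[m.str == k] <- 1
    res %x% blocks[[k]]  # Kronecker product places block k on its diagonal band
  })
  Reduce("+", res)
}

# Two illustrative 2x2 blocks.
B1 <- matrix(1:4, 2)  # main-diagonal block
B2 <- matrix(5:8, 2)  # first off-diagonal block
toeplitz.block(list(B1, B2))
# Yields the 4x4 matrix  [ B1 B2 ]
#                        [ B2 B1 ]
```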
|
47,912
|
How to compare 2 classifers using confusion matrix?
|
As @Enrique mentioned, there are many statistics you can calculate from a confusion matrix. Which ones are appropriate depends on the specific characteristics of your problem, such as the relative costs associated with true positives and false positives.
Chapter 11 of Applied Predictive Modeling gives a very detailed overview of how to think about evaluating classification models.
An Introduction to Statistical Learning, which is freely available as a pdf, provides a less detailed overview in chapter 4.
|
47,913
|
How to compare 2 classifers using confusion matrix?
|
From the confusion matrices you can compute the sensitivity, specificity, accuracy, precision, among other performance metrics for each of the classifiers. Then you can evaluate them in terms of those metrics. Here you can find the definition of several metrics and how they can be computed.
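As an illustration (with hypothetical counts, since the question's matrices are not reproduced here), these metrics are one-liners in R:

```r
# Hypothetical 2x2 confusion matrix counts for one classifier.
TP <- 40; FP <- 5; FN <- 10; TN <- 45

sensitivity <- TP / (TP + FN)                   # recall / true positive rate
specificity <- TN / (TN + FP)                   # true negative rate
precision   <- TP / (TP + FP)                   # positive predictive value
accuracy    <- (TP + TN) / (TP + FP + FN + TN)

round(c(sensitivity = sensitivity, specificity = specificity,
        precision = precision, accuracy = accuracy), 3)
# sensitivity 0.800, specificity 0.900, precision 0.889, accuracy 0.850
```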
|
47,914
|
How to compare 2 classifers using confusion matrix?
|
A confusion matrix is a square matrix; in the ideal case, all counts lie on its main diagonal and the off-diagonal entries are zero.
Confusion Matrix in binomial class:
------+-------
| TP | FP |
--------------
| FN | TN |
------+-------
Thus, there are two main ways to evaluate a predictor: accuracy or the F1-score (which should be used in imbalanced cases).
ACC = (TP + TN) / (P + N)
F1 = 2TP / (2TP + FP + FN)
So in your case, the second predictor is better with 89% accuracy than the first one with 67%.
Here is a Confusion Matrix Online Calculator.
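Plugging some hypothetical counts into the two formulas (not the counts behind the 67%/89% figures above) confirms the arithmetic:

```r
# Hypothetical counts; P and N are the actual positives and negatives.
TP <- 30; FP <- 10; FN <- 20; TN <- 40
P <- TP + FN
N <- FP + TN

ACC <- (TP + TN) / (P + N)          # (30 + 40) / 100 = 0.7
F1  <- 2 * TP / (2 * TP + FP + FN)  # 60 / 90 = 0.666...
c(ACC = ACC, F1 = F1)
```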
|
47,915
|
Testing for normality in non-normal distributions with zero skewness and zero excess kurtosis
|
Let me try to answer my own question:
Suppose $X\sim\mathcal{N}\left(0,\sigma^2\right)$ (assuming a zero mean is w.l.o.g.). Define the centered moments $\mu_k=E\left(X-E X\right)^k$ and their empirical counterparts $m_k=\frac{1}{n}\sum_{i=1}^{n}\bigl(X_i-\overline{X}\bigr)^k$.
Analyze the two (method of moments) estimators
$$
b_1=\frac{m_3}{m_2^{3/2}}\qquad\text{and}\qquad b_2=\frac{m_4}{m_2^2},
$$
for the skewness $\beta_1(X)=E(X-E(X))^3/\left[E(X-E(X))^{2}\right]^{3/2}$ and for the kurtosis $\beta_2(X)=E(X-E(X))^4/\left[E(X-E(X))^2\right]^2$. A CLT will ensure that
$$
\sqrt{n}\left(\begin{pmatrix}m_2\\m_3\\m_4\end{pmatrix}-\begin{pmatrix}\mu_2\\ \mu_3\\ \mu_4\end{pmatrix}\right)
$$
converges in distribution to a centered multivariate normal with covariance matrix $\Sigma$.
Now, we want to show that, when $X$ is normal,
$$
\sqrt{n}\left(\begin{pmatrix}b_1\\b_2\end{pmatrix}-\begin{pmatrix}\beta_1\\ \beta_2\end{pmatrix}\right)\to_d\mathcal{N}\left(\begin{pmatrix}0\\0\end{pmatrix},\begin{pmatrix}6 & 0\\0 & 24\end{pmatrix}\right).
$$
The following result is helpful (see, e.g., Rao C.R., Linear Statistical Inference and its Applications (1972), Section 6h2): $$\text{Acov}(\sqrt{n}m_j,\sqrt{n}m_k)=\mu_{j+k}-\mu_j\mu_k+jk\mu_2\mu_{j-1}\mu_{k-1}-j\mu_{j-1}\mu_{k+1}-k\mu_{k-1}\mu_{j+1}.$$ For the normal distribution,
$$
\mu_{2k}=\sigma^{2k}\frac{(2k)!}{k!2^k},\qquad\mu_{2k+1}=0,\qquad k=0,1,\ldots.
$$
Thus, $\mu_2=\sigma^2$, $\mu_3=0$, $\mu_4=3\sigma^4$, $\mu_5=0$, $\mu_6=15\sigma^6$ and so forth.
The limiting distribution of $(b_1,b_2)$ then is an application of the multivariate delta method.
The $(2\times3)$ matrix of derivatives of $b_1$ and $b_2$ w.r.t. $m_2$, $m_3$, $m_4$, evaluated at the population moments is
$$
J=\begin{pmatrix}0&\sigma^{-3}&0\\-6\sigma^{-2}&0&\sigma^{-4}\end{pmatrix}
$$
Also,
$$
\Sigma=\begin{pmatrix}2\sigma^4&*&15\sigma^6-\sigma^23\sigma^4\\*&6\sigma^6&*\\15\sigma^6-\sigma^23\sigma^4&*&105\sigma^8-(3\sigma^4)^2\end{pmatrix},
$$
where $*$ omits terms not needed (because they will be multiplied with zeros in $J$ when evaluating the $(2\times 2)$ variance matrix of interest $J\Sigma J'$).
The variance-covariance matrix of the delta method then gives
$$
J\Sigma J'=\begin{pmatrix}6 & 0\\0 & 24\end{pmatrix},
$$
as desired.
Thus, the limiting null distribution of the Jarque-Bera test
$$
JB=n\left(\frac{b_1^2}{6}+\frac{(b_2-3)^2}{24}\right)
$$
follows directly because, under the null of normality, $\sqrt{n}(b_1/\sqrt{6})\to_d N(0,1)$ and $\sqrt{n}((b_2-3)/\sqrt{24})\to_d N(0,1)$ such that $n(b_1^2/6)\to_d \chi^2_1$ and $n(b_2-3)^2/24\to_d\chi^2_1$. By asymptotic independence,
$$
JB\to_d\chi^2_2
$$
But, and that was the point of the original post, these arguments inherently required normality for the limiting distribution to obtain.
Hence, even if distributions share the skewness and kurtosis of the normal, there is no reason to believe that the JB test will be $\chi^2_2$, and thus, no reason to believe that the $p$-values will be uniform. So, my intuition was false.
(As also mentioned by @whuber in the comments, what would be required would be a distribution that shares the first eight moments with the normal, as $\text{Acov}(\sqrt{n}m_4,\sqrt{n}m_4)$ contains $\mu_{8}$. I am not aware of such a distribution though - examples would be appreciated, if existent!)
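A small Monte Carlo sketch (sample size and replication count chosen arbitrarily) can illustrate the asymptotic variances 6 and 24 under normality:

```r
# Monte Carlo check under normality:
# var(sqrt(n) * b1) should approach 6 and var(sqrt(n) * (b2 - 3)) should approach 24.
set.seed(1)
n <- 5000; reps <- 2000
stats <- replicate(reps, {
  x <- rnorm(n)
  m2 <- mean((x - mean(x))^2)
  m3 <- mean((x - mean(x))^3)
  m4 <- mean((x - mean(x))^4)
  c(b1 = m3 / m2^1.5, b2 = m4 / m2^2)
})
n * var(stats["b1", ])      # close to 6
n * var(stats["b2", ] - 3)  # close to 24
```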
|
47,916
|
Testing for normality in non-normal distributions with zero skewness and zero excess kurtosis
|
It's not so much that the test would be without any power, but you might find it interesting to see what happens to the relative power as $n$ increases when you compare the "Jarque-Bera" with (say) Shapiro-Wilk. Actually, here you go:
The light grey line marks the type I error rate; below that (with a false null) we have a biased test (biased tests are common with goodness of fit).
[Note to self: I should make a beta example that does the same trick as the gamma one. That should make for a more spectacular comparison.]
R code, if anyone cares to have it --
library(tseries)
f <- function(x) mean(x<.05)
reps <- 5000
b <- (sqrt(13)+1)/2
nseq <- c(16,25,64,100,225,400,625)
nres <- matrix(rep(NA,length(nseq)*2),nc=2)
for (i in seq_along(nseq)){
res <- replicate(reps,{
g1 <- rgamma(nseq[i],b)*sign(runif(nseq[i])-.5)
c(jarque.bera.test(g1)$p.value,shapiro.test(g1)$p.value)
})
nres[i,] <- apply(res,1,f)
}
and code to do the plot:
plot(qnorm(nres[,1])~sqrt(nseq),ylim=c(qnorm(.02),qnorm(.997)),
type="o",col=4,ylab="zscore of power",xlab="sqrt(n)")
points(qnorm(nres[,2])~sqrt(nseq),type="o",col="orange3")
abline(h=qnorm(0.05),col=8)
|
47,917
|
Why can't we use top-down methods in forecasting grouped time series?
|
The top-down methods implemented in the hts package were designed for hierarchical time series. If you want to define your own top-down method for some non-hierarchical time series, go right ahead. It's not wrong, it just hasn't been implemented in the hts package because there are much better solutions to the problem.
The best approach currently available is to use weighted least squares as explained in http://robjhyndman.com/working-papers/hgts/. This is the default in the hts package.
|
47,918
|
Why can't we use top-down methods in forecasting grouped time series?
|
Further to Rob's answer, let's look at your specific example:
Total
| |
A B
| | | |
AX AY BX BY
Total
| |
X Y
| | | |
AX BX AY BY
You have not one, but two hierarchies. For instance, the top one may group sales at product-location level first by location A and B and then add these up to get the Total, and the second one may group first by product X and Y and add these up.
To do top-down forecasting, you forecast the Total and then break this down by proportions. You can use historical proportions, in which case, yes, you can do this.
The problem arises if you want to use forecasted proportions to break down the Total forecasts. Why? Because if you forecast your component series A, B, X, Y separately, these forecasts will usually not be sum consistent. That is, if you first break the Total down to A & B and then break A down to AX & AY, you may get a different result for AX than if you first broke down to X & Y and then broke X down to AX & BX.
Thus, you will need to do some kind of reconciliation. And then, as Rob writes, you might as well use the "real" optimal reconciliation approach.
(Note that this problem does not arise if you use historical proportions to break forecasts down, because these are automatically consistent. However, historical proportions do not allow you to forecast out changing dynamics in your hierarchy, like changing market shares.)
|
47,919
|
Bayesian importance sampling as an answer to a "paradox" by Wasserman
|
There exists a Bayesian approach to the numerical resolution of integrals and differential equations: it is called probabilistic numerics. Introducing a Gaussian process prior on $g$ leads to a posterior probability on $g$ itself and on integrals depending on $g$ once observations $g(x_1),\ldots,g(x_p)$ have been made.
Philipp Hennig and Michael Osborne have created a webpage on this approach, where you can find references and links.
|
47,920
|
Linear regression - results depending on the scale of the inputs
|
Let's compare the two models.
The original one is clearly and well expressed in the question,
$$y = a + b_1x_1 + b_2 x_2 + b_{12}x_1x_2 + \epsilon.$$
Let's write the second model as
$$y = a^\prime + b^\prime_1 z_1 + b^\prime_2 z_2 + b^\prime_{12}z_1z_2 + \delta.$$
Because the values of the numbers 7.5, 47.5, etc. are of little interest, let's just name them with Greek letters:
$$z_i = \alpha_i x_i + \gamma_i.$$
We know them; they are constant; they will not need to be estimated.
Plugging these equations into the second model shows how it attempts to relate $y$ to the $x_i$:
$$\eqalign{
y &= &a^\prime + b^\prime_1 (\alpha_1 x_1 + \gamma_1) + b^\prime_2 (\alpha_2 x_2 + \gamma_2) + b^\prime_{12}(\alpha_1 x_1 + \gamma_1)(\alpha_2 x_2 + \gamma_2) + \delta \\
&= &(a^\prime + b^\prime_1\gamma_1 + b^\prime_2\gamma_2 + b^\prime_{12}\gamma_1\gamma_2) + (b^\prime_1\alpha_1 + b^\prime_{12} \gamma_2\alpha_1)x_1 + (b^\prime_2\alpha_2 + b^\prime_{12}\gamma_1\alpha_2)x_2 \\
&&+ (b^\prime_{12}\alpha_1\alpha_2)x_1x_2 + \delta.
}$$
Recalling that the default tests conducted by software compare coefficients to zeros, it's easy to compare the results, line by line, to the output for the first ($x$) model:
The errors are modeled in the same way: $\epsilon = \delta$. Therefore the fits (predictions) will be the same and so will the residuals. These are identical models, merely reparameterized.
The test of the intercept compares $a^\prime + b^\prime_1\gamma_1 + b^\prime_2\gamma_2 + b^\prime_{12}\gamma_1\gamma_2$ to $0$.
The tests of the coefficients compare $b^\prime_1\alpha_1 + b^\prime_{12} \gamma_2\alpha_1$ and $b^\prime_2\alpha_2 + b^\prime_{12}\gamma_1\alpha_2$ to $0$.
The test of the interaction term compares $b^\prime_{12}\alpha_1\alpha_2$ to $0$.
Only in the last case is there a simple relationship to the tests in the second model (involving only the primed coefficients): because rescaling $b^\prime_{12}$ by $\alpha_1\alpha_2$ also scales its standard error by the same amount, the $t$ statistic (which is the ratio of the estimate to its SE) does not change. Sure enough, the p-values for the interaction terms in both outputs agree because 6.731943e-01 is the same number as 0.67319431 (to within the displayed precision).
In the other cases (the intercept and coefficients of the $x_i$), the coefficients are linear functions of the coefficients for the second model. Their estimates will therefore be the very same linear functions of the estimated coefficients.
We can use this to work out the exact relationships in the two outputs provided we have information about the covariances of the estimates. This is because the standard errors of the linear combinations for the first three coefficients will depend on the variances of the estimates and their covariances. For instance,
$$\text{var}(\hat{b}_1) = \text{var}(\hat b^\prime_1\alpha_1 + \hat b^\prime_{12} \gamma_2\alpha_1) = \alpha_1^2\text{var}(\hat b^\prime_1) + (\gamma_2\alpha_1)^2\text{var}(\hat b^\prime_{12}) + 2\alpha_1^2\gamma_2\text{cov}(\hat b^\prime_1, \hat b^\prime_{12}). $$
(The hats over the letters denote data-based estimates, as usual.)
In theory, it matters not which model you choose. It is convenient, however, to use one where the automatic tests conducted by the software are relevant for your analytical purposes. Let that guide how you re-express the independent variables in a regression.
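The variance formula above can be checked numerically. The following sketch (with simulated data and arbitrary constants $\alpha_i$, $\gamma_i$, chosen only for illustration) verifies that it reproduces the standard error lm reports for $\hat b_1$:

```r
# Simulated data (purely illustrative).
set.seed(17)
n <- 100
x1 <- runif(n); x2 <- runif(n)
y <- 1 + 2*x1 - x2 + 0.5*x1*x2 + rnorm(n)

# Affine re-expressions z_i = alpha_i * x_i + gamma_i.
a1 <- 10; g1 <- 7.5
a2 <- 40; g2 <- 47.5
z1 <- a1*x1 + g1; z2 <- a2*x2 + g2

fit.x <- lm(y ~ x1*x2)
fit.z <- lm(y ~ z1*z2)

# Assemble var(b1-hat) from the primed model's covariance matrix.
V <- vcov(fit.z)
v.b1 <- a1^2 * V["z1","z1"] + (g2*a1)^2 * V["z1:z2","z1:z2"] +
        2 * a1^2 * g2 * V["z1","z1:z2"]
all.equal(sqrt(v.b1), coef(summary(fit.x))["x1", "Std. Error"])
# TRUE (up to numerical precision)
```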
|
47,921
|
Linear regression - results depending on the scale of the inputs
|
The $p$-values did not change because of the rescaling, but because you added constants to the variable. The intercept is the expected value of y when all explanatory variables are 0. The $p$-value is the test of the hypothesis that that conditional mean is equal to 0. So if you add a constant to one or more of the explanatory variables, then you are testing a different null-hypothesis and you get a different $p$-value. For example, say one of your explanatory variables is year of birth. Without centering your intercept refers to a person born in year 0, which is quite often a bit of an extrapolation. If You centered the year of birth by subtracting 1960, the intercept will refer to someone born in 1960.
The main effects (x1, x2, z1, and z2) are also influenced by the constant, because you added an interaction term. x1 is the effect of x1 when x2 is 0. So say x2 is year of birth; then x1 is the effect of x1 for someone born in year 0, i.e. someone quite old. Again, you change this by centering your variable at, say, someone born in 1960 to get a more meaningful effect of x1. But you are then testing a different null hypothesis, and the $p$-value should change.
Notice that the $p$-values of the interaction terms (x1:x2 and z1:z2) remain unchanged ($6.731943 \times 10^{-1} = 0.6731943$). This is because the interpretation of the interaction coefficient is not affected by any constants added to the variables.
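The same behavior can be checked numerically. A minimal sketch in Python (simulated data and hand-rolled OLS t statistics; the shift constants 10 and -5 are arbitrary): adding constants to the regressors leaves the interaction t statistic untouched but changes the main-effect t statistics.

```python
import numpy as np

# Simulated data with a genuine interaction effect
rng = np.random.default_rng(0)
n = 200
x1, x2 = rng.normal(size=(2, n))
y = 1 + 0.5 * x1 + 0.3 * x2 + 0.2 * x1 * x2 + rng.normal(size=n)

def t_stats(X, y):
    """Classical OLS t statistics for each column of the design matrix X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    s2 = resid @ resid / (X.shape[0] - X.shape[1])
    se = np.sqrt(s2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta / se

X = np.column_stack([np.ones(n), x1, x2, x1 * x2])
z1, z2 = x1 + 10, x2 - 5                     # same variables, constants added
Z = np.column_stack([np.ones(n), z1, z2, z1 * z2])

t_x, t_z = t_stats(X, y), t_stats(Z, y)
print(np.isclose(t_x[3], t_z[3]))   # interaction t statistic unchanged -> True
print(t_x[1], t_z[1])               # main-effect t statistics generally differ
```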
|
47,922
|
Linear regression - results depending on the scale of the inputs
|
Just to show @Zhanxiong's last comment:
# response variable
y <- rnorm(8, mean=17, sd=1.2)
# model 1
x1 <- c(1, -1, -1, -1, 1, 1, -1, 1)
x2 <- c(-1, 1, 1, -1, -1, 1, -1, 1)
fit1 <- lm(y ~ x1 + x2 + x1*x2)
# model 2 (factors rescaled - no constants added)
z1 <- x1 * 7.5
z2 <- x2 * 11.5
fit2 <- lm(y ~ z1 + z2 + z1*z2)
summary(fit1)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 17.2558 0.2382 72.439 2.18e-07 ***
x1 0.1923 0.2382 0.807 0.465
x2 0.1152 0.2382 0.484 0.654
x1:x2 0.1691 0.2382 0.710 0.517
summary(fit2)
Estimate Std. Error t value Pr(>|t|)
(Intercept) 17.255837 0.238213 72.439 2.18e-07 ***
z1 0.025641 0.031762 0.807 0.465
z2 0.010018 0.020714 0.484 0.654
z1:z2 0.001960 0.002762 0.710 0.517
|
47,923
|
How to specify logistic regression as transformed linear regression?
|
The short answer is "you don't". They don't correspond.
Logistic regression is not a transformed linear regression.
Even though $\text{logit}(E(Y))$ (where $E(Y)=P(Y=1)$) may be written as $X\beta$, and so seemingly linearized, you can't transform the $y$ values to make a linear regression, nor can you fit a nonlinear least squares model (transforming the x's) that reproduces the logistic regression. [You may be able to fit a function of the logistic form via nonlinear least squares, but it won't have the same estimates.] Answers to this question give some additional details.
The observations enter the model estimation quite differently.
Logistic regression is fitted by maximum likelihood estimation for a binomial model with the natural link function (which is the logit for the binomial). That is, the data are seen as $y_i\sim\text{binomial}(n_i,1/(1+e^{-X_i\beta}))$
(Where $X_i\beta=\beta_0+\sum_j \beta_j x_{ij}$ is the linear predictor for case $i$.)
Once you find the MLE for the model, you could find a set of weights and a set of pseudo-observations such that a linear model yields the parameter estimates, but the connection to the original data is quite indirect and in general you can't get to the point of doing so until you've already found the solution.
It's not particularly useful to think of that as a transformation of the data.
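The "weights and pseudo-observations" mentioned above are what iteratively reweighted least squares (IRLS) produces. A minimal numpy sketch (simulated data; not any package's actual implementation): at each step the logistic MLE problem is replaced by a weighted least-squares fit to a working response $z$, but both the weights $w$ and $z$ depend on the current coefficients, which is why the connection to the raw data is indirect.

```python
import numpy as np

# Simulated binary data
rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.2 * x))))

beta = np.zeros(2)
for _ in range(25):                      # IRLS iterations
    eta = X @ beta
    mu = 1 / (1 + np.exp(-eta))          # fitted probabilities
    w = mu * (1 - mu)                    # working weights
    z = eta + (y - mu) / w               # working ("pseudo") response
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))

# At the MLE the score X'(y - mu) vanishes, so the weighted linear fit
# reproduces itself -- but w and z were only available once beta was known.
score = X.T @ (y - 1 / (1 + np.exp(-X @ beta)))
print(beta, np.abs(score).max())
```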
|
47,924
|
What is the purpose of the 'convenience' 1/2 fraction on the sum of squared errors?
|
As long as $E(w) \geq 0$ (which is true for this sum of squares), minimizing $(1/2)E(w)$ is equivalent to minimizing $E(w)$. As has been pointed out in the comments, the factor of $1/2$ disappears when you take the derivative of $E(w)$.
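To make the cancellation explicit (writing the error as a sum of squared residuals over data points; the exact symbols here are illustrative, following Bishop's general notation):

```latex
E(w) = \frac{1}{2}\sum_{n=1}^{N}\bigl(y(x_n, w) - t_n\bigr)^2
\quad\Longrightarrow\quad
\frac{\partial E}{\partial w_j}
  = \sum_{n=1}^{N}\bigl(y(x_n, w) - t_n\bigr)\,
    \frac{\partial y(x_n, w)}{\partial w_j}.
```

The factor of $2$ from the chain rule cancels the $\tfrac{1}{2}$, leaving a gradient with no stray constant; and since the minimizer of $cE(w)$ for any $c>0$ is the minimizer of $E(w)$, nothing else about the solution changes.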
|
47,925
|
What is the purpose of the 'convenience' 1/2 fraction on the sum of squared errors?
|
It probably doesn't matter whether you use $\frac{1}{2}$ or $\frac{1}{n}$ for MSE because the denominator value of 2 and $n$ will never change for the dataset being evaluated. The scale of both methods will differ due to the magnitude of what's calculated, but nevertheless, you'll be dividing by a constant that never changes. If you compare MSE across datasets, then you might go with $\frac{1}{n}$, since that will scale with sample size -- however, within the algorithm being fitted, the artificial neural network (ANN) just needs a reference point to gauge how bad/good the fit is.
FYI: the same equation (i.e., with the $\frac{1}{2}$) is used for MSE in the neural network chapter of Hastie, Tibshirani, and Friedman (The Elements of Statistical Learning, Springer). Recall, however, that MSE is for continuous function approximation using an ANN, while cross-entropy is used for classification problems with ANNs.
Since you're reading Bishop, one thing you may not pull away from the reading is that a key issue with ANNs is that they like input features to have range [-1,1] with no correlation between features. If there is correlation between features, then an ANN will spend time learning the correlation -- which is what you don't want an algorithm to do. Therefore, run PCA to decorrelate the features first and then input the top 10 orthogonal PC's into the ANN.
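A minimal sketch of that preprocessing step in Python (numpy only; the simulated feature matrix and the choice of 10 components are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 30))
X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=200)   # inject a correlated feature

Xc = X - X.mean(axis=0)                  # center the features
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:10].T                  # top-10 principal component scores
# (one could additionally rescale each score column into [-1, 1])

C = np.cov(scores, rowvar=False)
print(np.allclose(C, np.diag(np.diag(C)), atol=1e-8))   # uncorrelated -> True
```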
Last, there is another primary issue with ANNs regarding input samples, which is related to redundancy. That is, many of your records may be the same, and inputting the same (similar) records into an ANN does not help. One of the only groups I know of who have developed methods to collapse features and redundant samples simultaneously before input to an ANN is Jurik Research (DDR).
Finally, look at Ripley's text on ANNs, since the primary focus will always be how you regularize in order to minimize over-fitting and maximize generalization.
|
47,926
|
How to compare performance of regression and classification?
|
Imagine this: You are trying to predict age of a population using some
features. It does not work that well. Then, you are reducing the
complexity of the problem. You only try to predict whether age is
above or below 20. This works well, using the same features. I simply
want to quantify the improvement gained by this simplification.
So you simply would have two models: one that says that the age is a numeric value $\hat y$ (regression), and the other that says that the age is some constant depending on whether it is below or above a certain threshold (classification). To choose the optimal constants, you would simply take the conditional mean of the age, given being above or below the threshold. Now you can simply compare both outcomes using the same metrics used for comparing regression models (e.g. RMSE, MAE).
I guess, in the vast majority of cases, this would tell you that no matter how bad the regression model is, it is still better than predicting only two constants. But if you think about it, at the end of the day, this is what the classification model will give you.
Now, if you agree with me that using a classifier leaves you with two conditional means as constants approximating the continuous variable, another thing follows. An algorithm that, conditional on some variables, makes a binary split (you also said in the comments that you actually don't have any prespecified threshold) and predicts two conditional means is a very simple regression tree (see here for an explanation of how decision trees work). Usually, you would use much more complicated regression trees that make more splits and so get more accurate. Even more, usually you wouldn't use a single tree, but rather a random forest of many trees, trained on different subsets of data, that make multiple different splits, and then aggregate the outputs. So, have you tried a random forest? It is a simple, yet pretty powerful algorithm that would be doing all the "classification" part for you, but better.
But the general answer is that in most cases you can't compare classification to regression. Both approaches give you different kinds of outcomes; I can't think of a situation where they would be equivalent. In terms of real-life examples, say that you used your algorithm to predict the age of your customers and, based on this, send them targeted marketing campaigns. With more precise predictions about age you would be able to send them age-specific campaigns. In this case the more accurate you are, the better for you. On the other hand, you could quantify this and check how much better your business does if you have campaigns for precise ages vs two age groups (in terms of some business metric like clicks, purchases, etc). Based on this, you would also know how much better off you would be using regression vs classification. The same would apply if the classification task were something completely different, say classifying "send the campaign vs not (irrelevant of age)", or sending age-specific campaigns based on regression predictions for age; here also you should rather undertake an A/B test and simply check what pays better (or gives you more clicks, etc). Saying this differently, to answer your question you need to consider also what you want to use the outputs of the algorithms for, and check which algorithm works better for that task.
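The first point, that even a mediocre regression usually beats predicting two conditional means, is easy to check numerically. A hedged Python sketch (simulated ages, a threshold of 20, and an idealized classifier that always picks the correct side):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
x = rng.normal(size=n)
age = 30 + 8 * x + rng.normal(scale=6, size=n)   # simulated true ages

# Regression: a simple linear fit of age on x
A = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(A, age, rcond=None)
pred_reg = A @ beta

# "Classification": two constants -- the conditional means below/above 20
# (idealized: assumes the classifier always gets the side right)
young = age < 20
pred_cls = np.where(young, age[young].mean(), age[~young].mean())

def rmse(pred):
    return np.sqrt(np.mean((age - pred) ** 2))

print(rmse(pred_reg), rmse(pred_cls))   # regression error is smaller
```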
|
47,927
|
How to compare performance of regression and classification?
|
I would choose the metric depending on my problem. I.e., if my problem is to predict the age of a person, I'd choose RMSE; if my problem is to predict young or old, I'd choose accuracy.
After choosing the metric, you need to be able to use that metric with both models. I.e., if your problem is to predict young or old, then presumably you have a threshold to determine the labels used to train the LR, so you can apply what @EdM mentioned.
IMHO, if you compare two models that perform different tasks, you can't conclude that one is better than the other, because they are doing different things.
Let me know if I misunderstood something.
|
47,928
|
Conditional mass function of minimum of two discrete uniform random variables given the maximum
|
Draw a $6$" $\times$ $6$" square and divide it into a $6\times 6$ array of $36$ one-inch squares. Label the rows and columns with numbers $1$-$6$ and, in each square, write down the values of $(X,Y)$, $U$, $V$, $S$ and $T$. Then, count!
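The counting can also be done mechanically. A small Python sketch (assuming, as in the question title, that $U$ and $V$ denote the minimum and maximum of two fair dice; conditioning on $V=4$ is just an example):

```python
from collections import Counter
from fractions import Fraction

# Tally (min, max) over the 36 equally likely outcomes of two fair dice
counts = Counter((min(x, y), max(x, y))
                 for x in range(1, 7) for y in range(1, 7))

v = 4                                   # condition on the maximum V = 4
total = sum(c for (u, w), c in counts.items() if w == v)
pmf = {u: Fraction(c, total) for (u, w), c in counts.items() if w == v}
print(pmf)   # U is 1, 2, or 3 with probability 2/7 each, and 4 with 1/7
```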
|
47,929
|
How to find factor that is making matrix singular
|
You can use an eigen-decomposition to find linear combinations of your columns that vanish, then remove enough columns participating in these linear combinations.
Here's a matrix with a vanishing column linear combination:
> M <- matrix(c(0, 0, 0, 1, 0, 1, 0, 1, 1, 1, -1, 0), nrow=4, byrow=TRUE)
> M
[,1] [,2] [,3]
[1,] 0 0 0
[2,] 1 0 1
[3,] 0 1 1
[4,] 1 -1 0
If a linear combination of the columns vanishes, then the same is true if I cut off the bottom of the matrix to make it square:
> sM <- M[1:3, ]
> sM
[,1] [,2] [,3]
[1,] 0 0 0
[2,] 1 0 1
[3,] 0 1 1
Now compute the eigenvalues and eigenvectors:
> eigen(sM)
$values
[1] 1.618034 -0.618034 0.000000
$vectors
[,1] [,2] [,3]
[1,] 0.0000000 0.0000000 0.5773503
[2,] 0.5257311 0.8506508 0.5773503
[3,] 0.8506508 -0.5257311 -0.5773503
So there's a zero eigenvalue, which we expected, and it corresponds to the column linear combination:
$$ .57 C_1 + .57 C_2 - .57 C_3 = 0 $$
So removing one of these columns will result in a full column rank matrix.
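In practice the SVD is a more robust way to find the same vanishing combinations, since it works on the full rectangular matrix without truncating rows. A Python sketch of the same example (numpy instead of R, as an illustration):

```python
import numpy as np

M = np.array([[0,  0, 0],
              [1,  0, 1],
              [0,  1, 1],
              [1, -1, 0]], dtype=float)

U, s, Vt = np.linalg.svd(M)
tol = max(M.shape) * np.finfo(float).eps * s.max()
rank = int(np.sum(s > tol))
null_vecs = Vt[rank:]        # rows spanning the vanishing column combinations

print(rank)                  # 2: one column is redundant
print(M @ null_vecs.T)       # numerically zero
```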
|
47,930
|
Bonferroni Correction for Post Hoc Analysis in ANOVA & Regression
|
Preface: There are many different ways to adjust for multiple comparisons. Olive Dunn proposed the Bonferroni adjustment in 1961, and the multiple comparisons literature (see, for example, Shaffer, 1995) has grown to a variety of family-wise error rate adjustment methods (of which Bonferroni is the simplest), and the more recent false discovery rate adjustment methods. Moreover, adjustments can either be made to $\alpha$, or the math may be inverted and instead applied to adjust p-values (sometimes adjusted p-values are called q-values). My own preference is to adjust $\alpha$, since adjustments to p may need a clumsy upper-truncation at 1.0 to retain interpretability as a probability. Your question, and my answer, apply regardless of which of these methods you choose, and whether you apply the adjustment to $\alpha$ or to the p-values.
You would apply the Bonferroni to post hoc multiple comparisons following rejection of a one-way ANOVA. In fact that is a canonical example of when to apply the Bonferroni adjustment. These pairwise tests are not quite the same thing as a bunch of standard t tests, because following rejection of an ANOVA the t test statistics are calculated using the pooled variance implicit in the ANOVA's null hypothesis, rather than variance from the two specific groups compared for a single test statistic.
You are correct: we would use multiple comparisons adjustments when making many statistical tests, as in the case of the t tests for the $\beta$ estimates in multiple regression, or in feature selection for an N-way ANOVA.
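Both routes, adjusting $\alpha$ or adjusting the p-values, take only a couple of lines and reject exactly the same hypotheses. A Python sketch (the raw p-values are made up for illustration):

```python
import numpy as np

pvals = np.array([0.001, 0.012, 0.034, 0.21])   # hypothetical raw p-values
m, alpha = len(pvals), 0.05

reject_alpha = pvals < alpha / m                # adjust alpha: 0.05 / 4
p_adj = np.minimum(pvals * m, 1.0)              # adjust p, truncated at 1.0
reject_padj = p_adj < alpha

print(reject_alpha)                             # [ True  True False False]
print((reject_alpha == reject_padj).all())      # the two routes agree -> True
```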
References
Dunn, O. J. (1961). Multiple comparisons among means. Journal of the American Statistical Association, 56(293):52–64.
Shaffer, J. P. (1995). Multiple hypothesis testing. Annual Review of Psychology, 46:561–584.
|
47,931
|
Simple way to cluster histograms
|
Use hierarchical clustering or DBSCAN.
They have one huge benefit over k-means: they work with arbitrary distance measures, and with histograms you might want to use, for example, Jensen-Shannon divergence or similar measures designed to capture the similarity of distributions.
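For example, the Jensen-Shannon divergence between two (normalized) histograms takes only a few lines and can then feed any distance-based clusterer. A Python sketch with toy histograms:

```python
import numpy as np

def js_divergence(p, q):
    """Jensen-Shannon divergence (base-2 logs, so the value lies in [0, 1])."""
    p, q = p / p.sum(), q / q.sum()          # normalize counts to distributions
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0                         # treat 0 * log 0 as 0
        return np.sum(a[mask] * np.log2(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

h1 = np.array([10, 20, 30, 40.0])
h2 = np.array([40, 30, 20, 10.0])            # h1 reversed
h3 = np.array([11, 19, 31, 39.0])            # a slight perturbation of h1
print(js_divergence(h1, h2), js_divergence(h1, h3))   # similar pair is closer
```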
|
47,932
|
Simple way to cluster histograms
|
K-means could do this. K-means is an unsupervised clustering algorithm. Rewrite each histogram as a vector and use Euclidean distance.
This post goes into the assumptions of K-means: How to understand the drawbacks of K-means. You might want to check these.
You have to determine the number of clusters yourself by estimating models with different k.
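A sketch with scikit-learn, including a simple inertia comparison across values of k (toy data, illustrative only):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy data: six histograms over 10 bins from two distinct shapes
base1 = np.array([5, 4, 3, 2, 1, 1, 1, 1, 1, 1], dtype=float)
base2 = base1[::-1]
hists = np.vstack([b + rng.uniform(0, 0.5, 10) for b in [base1]*3 + [base2]*3])
hists = hists / hists.sum(axis=1, keepdims=True)   # each row is one histogram

# Try several k and record the within-cluster sum of squares (inertia)
for k in (1, 2, 3):
    km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(hists)
    print(k, round(km.inertia_, 4))

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(hists)
print(labels)
```

The inertia drops sharply up to the "right" k and flattens afterwards, which is the usual elbow heuristic for choosing the number of clusters.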
|
47,933
|
Censored Binomial model - log likelihood
|
Let the constant probability of contamination be $p$, which is to be estimated, and let $X$ be a random variable equal to $1$ when contamination is observed, $0$ otherwise. Writing $q=1-p$ for the chance of no contamination in any individual unit, the probability of observing no contamination (that is, $X=0$) in a batch of size $n$ ($n = 1, 2, 3, \ldots$) is $q^n$, whence the chance of observing contamination is
$$\Pr(X=1\,|\, n) = 1 - q^n.$$
In a dataset of independent observations $\mathbf{x} = (x_1, x_2, \ldots)$ of $X$ from batches of size $\mathbf{n} = (n_1, n_2, \ldots)$, the likelihood therefore is
$$L(q; \mathbf{x}, \mathbf{n}) = \prod_{x_i=1} \left(1 - q^{n_i}\right)\prod_{x_j=0} q^{n_j},$$
whence the log likelihood is
$$\Lambda(q) = \sum_{x_i=1} \log\left(1 - q^{n_i}\right) + \sum_{x_j=0} {n_j} \log q.$$
At the extremes $q\to 0$ or $q\to 1$ this continuous function obviously diverges to $-\infty$, implying it has a global maximum somewhere in the interval $(0,1)$ corresponding to a zero of the derivative
$$\frac{d}{d q} \Lambda(q) = -\sum_{x_i=1} \frac{n_i q^{n_i-1}}{1 - q^{n_i}}+ \sum_{x_j=0} \frac{n_j} {q}.$$
Upon multiplying by $q$, the zeros are seen to be the solutions of the equation
$$\sum_{x_i=1} n_i\left( \frac{1}{1 - q^{n_i}}\right) - \sum_{x_i=1}n_i = \sum_{x_i=1} n_i\left( \frac{1}{1 - q^{n_i}}-1\right)= \sum_{x_i=1} \frac{n_i q^{n_i}}{1 - q^{n_i}}= \sum_{x_j=0} n_j.$$
Write, therefore, $N = \sum_{i} n_i$, obtaining
$$\sum_{x_i=1} \frac{n_i}{1 - q^{n_i}} = N.$$
Because for any $k\ge 1$ the function $q\to 1/(1-q^k)$ increases monotonically from $1$ to $\infty$ when $0\lt q \lt 1$, the left hand side increases monotonically from $\sum_{x_i=1}n_i$, which does not exceed $N$. Consequently there is a unique solution $0 \lt \hat q \lt 1$. It can quickly be found using any decent root finder. The estimated probability of contamination is $\hat p = 1 - \hat q$. Confidence intervals, etc., can be found with the usual Maximum Likelihood machinery.
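Numerically, the unique root is easy to find; here is a sketch with SciPy's `brentq` on simulated data (the variable names and values are illustrative, not from any standard package):

```python
import numpy as np
from scipy.optimize import brentq

rng = np.random.default_rng(17)
sizes = (1 + rng.poisson(3, size=20)) * 10     # simulated batch sizes n_i
p_true = 0.02
x = rng.binomial(sizes, p_true) > 0            # contamination observed per batch

N = sizes.sum()

def g(q):
    # Left-hand side of the estimating equation minus N
    return (sizes[x] / (1.0 - q ** sizes[x])).sum() - N

# g increases monotonically on (0,1), so a bracketing root-finder suffices
q_hat = brentq(g, 1e-12, 1 - 1e-12)
p_hat = 1 - q_hat
print(p_hat)
```

The bracket requires at least one contaminated and one clean batch; otherwise the likelihood is maximized at a boundary and no interior root exists.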
So much information is lost in observing $X$ that these estimates will not be precise. Even with large batches (in the hundreds) and a large number of batches (in the hundreds), when $p$ is small (as one would expect), its estimate $\hat p$ can easily err by a factor of $2$ or greater. Estimates will be particularly poor when few or almost all of the batches exhibit contamination. This prognosis is borne out by the simulation results shown below, which present histograms of the estimates relative to $p$ (that is, the ratios $\hat{p}/p$) and scatterplots of the fraction of contaminated batches (the mean of the $x_i$) against the estimates $\hat p$.
Simulation results for $20$ batches averaging about $40$ units per batch.
Here is the R code that produced this figure. It turns out that minimizing the square of the objective function is a little more accurate and faster than finding its root in $(0,1)$, so optimize was chosen to perform the calculations.
n.batch <- 20 # Number of batches
set.seed(17)
sizes <- (1+rpois(n.batch, 3)) * 10 # Batch sizes
#
# Simulate data for various values of p and show the distribution of estimates.
#
f <- function(q, x, n) sum((n / (1 - q^n))[x]) - sum(n)
par(mfcol=c(2,4))
for (p in c(1e-4, 1e-3, 1e-2, 1e-1)) {
p.hat <- replicate(1e3, {
x <- rbinom(length(sizes), sizes, p) > 0
solution <- optimize(function(q) f(q, x, sizes)^2,
lower=1e-16, upper=1-1e-16, tol=1e-8)
c(sum(x), 1 - solution$minimum)
})
hist(p.hat[2,]/p, main=paste0("Estimates for p=", p), xlab="Relative p Estimate")
abline(v=1, col="Red", lwd=2)
plot(p.hat[1,]/n.batch, p.hat[2,]/p, xlim=c(0,1),
xlab="Fraction Contaminated", ylab="Relative p Estimate")
}
|
47,934
|
Censored Binomial model - log likelihood
|
Note that you can also pass the negative log likelihood to function mle2 in package bbmle, which has the advantage that you then also get 95% profile confidence intervals on your estimated $p$. For example:
n = c(7, 7, 7, 7, 7, 8, 9, 9, 9, 10, 10, 10, 11, 11, 11, 11, 11, 11, 12, 12, 12, 12, 12, 13, 13, 13, 13, 15, 15, 16, 16, 16, 16, 17, 17, 17, 17, 17, 17, 18, 18, 19, 20, 20, 20, 20, 21, 21, 21, 21, 22, 22, 22, 23, 23, 24, 24, 25, 27, 31)
x = c(rep(0,6), 1, rep(0,7), 1, 1, 1, 0, 1, rep(0,20), 1, rep(0,13), 1, 1, rep(0,5))
neglogL = function(p, x, n) -sum((log(1 - (1-p)^n))[x]) -sum((n*log(1-p))[!x]) # negative log likelihood
require(bbmle)
fit = mle2(neglogL, start=list(p=0.01), data=list(x=x, n=n))
c(coef(fit),confint(fit))*100 # estimated p (in %) and profile likelihood confidence intervals
# p 2.5 % 97.5 %
# 0.9415172 0.4306652 1.7458847
summary(fit)
|
47,935
|
Is it possible to calculate Q1, Median, Q3, StDev from already aggregated data?
|
You have mean, counts and StDev of the observations, so aggregated StDev is a matter of algebra. I'm sure you can figure it out easily.
The quantiles are trickier. Consider the Q1 of two samples: they bound the Q1 of the combined sample. If $Q1_1>Q1_2$, then it's easy to see that the aggregated Q1 satisfies $Q1_2<Q1$ and $Q1<Q1_1$. That's all you can say about the quantiles, i.e. in your case $\min_i(Q1_i)<Q1<\max_i(Q1_i)$.
You can get a little more from your data by using asymptotic sample quantile distribution. In this case instead of getting the bounds, you could estimate the StDev of the quantiles. You'd have to assume that the distribution doesn't change during the day.
Alternatively, you could try to estimate the quantiles during the day, e.g. they're higher in the morning and lower in the evening. In this case, you could run a test to see whether this is the case.
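The algebra for the aggregated mean and StDev can be sketched as follows (this assumes the per-group StDevs use the population, ddof = 0, convention; the function name is mine):

```python
import numpy as np

def combine_stats(counts, means, sds):
    """Aggregate per-group counts, means and (population) SDs."""
    counts, means, sds = (np.asarray(a, float) for a in (counts, means, sds))
    n = counts.sum()
    grand_mean = (counts * means).sum() / n
    # Total SS about the grand mean = within-group SS + between-group SS
    ss = (counts * sds**2).sum() + (counts * (means - grand_mean)**2).sum()
    return grand_mean, np.sqrt(ss / n)

# Check against statistics of the pooled raw data
rng = np.random.default_rng(0)
groups = [rng.normal(mu, 1 + mu, size=20 + 10*mu) for mu in (0, 1, 2)]
m, s = combine_stats([len(g) for g in groups],
                     [g.mean() for g in groups],
                     [g.std() for g in groups])
pooled = np.concatenate(groups)
print(m - pooled.mean(), s - pooled.std())   # both ~0
```

If the groups instead report sample SDs (ddof = 1), the within-group term becomes $(n_g-1)s_g^2$ and the final division changes accordingly.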
|
47,936
|
How can the "anti-correlation" between these two curves be shown?
|
One curve almost looks like the derivative of the other and sometimes such pairs of curves are plotted against each other with curved connections. For instance, for plotting velocity versus acceleration to see cycles better. Here is red versus blue for your toy data:
Arrows and annotations are sometimes added. I don't know what these kinds of plots are properly called. I've heard "phase-plane" diagrams but that term includes a lot of other kinds of plots, too.
The data points are connected in this case. With more and noisier data, you'd probably want some kind of interpolated curve that just goes near each point.
Update: In case it doesn't go without saying, I'm not sure what you mean by "anti-correlation". I'm thinking you want to show a relationship between two curves that is not functional in the usual sense. For the chart I've shown, you can think of it as a parametric plot in that each variable (blue and red) is a function of a parameter ("Coordinate" in your table).
For comparison, here's another application of this kind of diagram from a NYT graphic on gas prices.
|
47,937
|
How can the "anti-correlation" between these two curves be shown?
|
The plot of one measure against the other (@xan's answer) is a good idea, except that I don't think it makes sense to join the points in this way. It only makes sense if the order of observations is really important. My understanding is that the fact that they're anti-correlated doesn't have anything to do with their ordering.
So you should plot them one against the other, to get a cloud of points. Measure the correlation using a metric like Pearson's correlation coefficient, and you will presumably get a negative value like -0.5 or so.
Then, you can show that this is statistically significant by a randomization test:
you have the values $\text{blue}_0, \text{blue}_1, \ldots, \text{blue}_n$ and $\text{red}_0, \text{red}_1, \ldots, \text{red}_n$.
Saying that the anti-correlation you observe is significant and is unlikely to happen by chance basically means that if instead of matching $\text{blue}_k$ to $\text{red}_k$ for all $k$, you match them randomly, then it is very unlikely that the resulting dataset will display this level of correlation.
So you can prove this by generating many random permutations of the red data, and calculating the correlation of the permuted red values with the original blue.
Sort the correlation values obtained, and see how extreme the true correlation is. Is it in the top 1%? 0.1%? This gives you an estimate of how unlikely it is to happen by chance.
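The randomization test above can be sketched in a few lines (toy anti-correlated data for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
blue = rng.normal(size=n)
red = -0.6 * blue + rng.normal(scale=0.8, size=n)   # anti-correlated by design

r_obs = np.corrcoef(blue, red)[0, 1]

# Null distribution: correlation after randomly re-matching red to blue
n_perm = 10_000
r_null = np.array([np.corrcoef(blue, rng.permutation(red))[0, 1]
                   for _ in range(n_perm)])

# One-sided p-value: how often a random matching is at least this anti-correlated
p_value = (np.sum(r_null <= r_obs) + 1) / (n_perm + 1)
print(r_obs, p_value)
```

The "+1" in numerator and denominator is the usual correction that keeps the permutation p-value strictly positive.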
|
47,938
|
Difference between a random slope/intercept model and an ANCOVA with an interaction?
|
In this case, I think you want to use the model with the interaction. A random slopes/intercepts model would suggest that you have slopes/intercepts that can vary with regard to some random factor. For example, imagine that you have participants completing a task in which they make several responses to images under condition A and under condition B. You may be interested in the effect of condition on your outcome controlling for gender. Thus you might run an ANCOVA as you tried before and it would look like this:
$$ Outcome_{ij} = \beta_0 + \beta_1Condition_{ij} + \beta_2Gender_j + \epsilon_{ij}$$
Here i represents a particular trial and j represents a particular participant. Since condition varies within each participant you have different condition values within participants. Since gender varies between participants, gender only varies as a function of j and not i. This sort of model, in this case, would be incorrect in that it would ignore important violations of assumptions of non-independence of errors. If you had such a repeated measures design, it would be correct, instead to allow the intercept and slope of condition to vary from participant to participant as so:
$$ Outcome_{ij} = (\beta_0 + u_{0j}) + (\beta_1 + u_{1j})Condition_{ij} + \beta_2Gender_j + \epsilon_{ij} $$
Such a model would solve your non-independence problem, but likely not your homogeneity of regression problem, since (in this example) homogeneity of regression would mean that the effect of condition depends on gender. Thus, even with a random slopes and intercepts model you would want to include the Condition*Gender interaction. I suppose you could technically allow the slope and intercept in your model to vary randomly as a function of gender, but I think this makes little sense for two reasons.
1. Gender is a fixed effect, and it is likely helpful for you to be able to easily determine by how much the condition effect differs between males and females. The random intercept/slope model will only provide you with a measure of the variance of the Condition effect across levels of the random factor.
2. It gets tricky to estimate random variances with only two levels of the random factor. The glmm wiki (http://glmm.wikidot.com/faq) states that a random factor should have a minimum of 5-6 levels. Additionally, the wiki's authors cite Crawley, M. J. (2002), Statistical Computing: An Introduction to Data Analysis using S-PLUS, John Wiley & Sons, which may be useful reading regarding this particular consideration.
Anyways, my recommendation would be to stick with the model that includes the X1*X2 interaction, although the notation on your interaction model suggests that you might also want to make sure you're accounting for potential violations of the assumption of non-independence.
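As a sketch, the recommended model — a random intercept and condition slope per participant plus the Condition×Gender fixed-effect interaction — could be fit with, for example, statsmodels (simulated data; all names and values here are illustrative):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for pid in range(30):                      # 30 participants, 10 trials each
    gender = pid % 2
    u0, u1 = rng.normal(0, 0.5, size=2)    # per-participant intercept/slope
    for _ in range(10):
        cond = rng.integers(0, 2)
        y = (1 + u0) + (0.8 + u1 + 0.5*gender)*cond + 0.3*gender \
            + rng.normal(scale=0.5)
        rows.append((pid, gender, cond, y))
df = pd.DataFrame(rows, columns=["pid", "gender", "cond", "y"])

# Random intercept and condition slope per participant, plus the
# Condition x Gender interaction as a fixed effect
fit = smf.mixedlm("y ~ cond * gender", df, groups="pid",
                  re_formula="~cond").fit()
print(fit.fe_params)
```

The `cond:gender` fixed effect directly estimates how much the condition effect differs between genders, which the random-slope structure alone would not give you.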
|
47,939
|
Interaction between time-variant and time-invariant variable in FE model
|
Your intuition is fine. When you take the partial derivative with respect to $x_{1,it}$, then you get exactly what you were looking for.
$$\frac{\partial y_{it}}{\partial x_{1,it}} = \beta_1 + \eta z_i $$
This is particularly convenient if $z_i$ is a dummy variable. Wooldridge (2010) "Econometric Analysis of Cross Section and Panel Data" has a similar example where he interacts a time-invariant female dummy with time dummies. So even though one cannot estimate the female coefficient directly, its interaction with the time dummies still has a meaning as it shows the increase in the gender wage gap over time. So what you propose is perfectly valid under the usual assumptions, e.g. $z_i$ and $x_{1,it}$ are uncorrelated with the error $\epsilon_{it}$.
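A quick numerical sketch of this point (simulated panel; the within transformation wipes out $z_i$ itself but leaves the interaction $z_i x_{1,it}$ identified):

```python
import numpy as np

rng = np.random.default_rng(0)
N, T = 200, 5                                    # units and periods
i = np.repeat(np.arange(N), T)
z = rng.integers(0, 2, size=N)[i].astype(float)  # time-invariant dummy z_i
x = rng.normal(size=N * T)                       # time-varying x_{1,it}
alpha = rng.normal(size=N)[i]                    # unit fixed effects
beta1, eta = 1.5, -0.7
y = alpha + 2.0*z + beta1*x + eta*z*x + rng.normal(scale=0.1, size=N*T)

def within(v):
    # Within transformation: subtract each unit's time average
    return v - (np.bincount(i, weights=v) / T)[i]

# z_i is wiped out by demeaning, but z_i * x_{1,it} survives
X = np.column_stack([within(x), within(z * x)])
coef, *_ = np.linalg.lstsq(X, within(y), rcond=None)
print(coef)   # approximately [1.5, -0.7]
```

Both $\beta_1$ and $\eta$ are recovered, while the level effect of $z_i$ (the 2.0 here) is absorbed into the fixed effects, exactly as in the Wooldridge example.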
|
47,940
|
Classifier for unbalanced dataset?
|
Most classifiers in sklearn support unbalanced datasets, through the sample_weight parameter in the clf.fit methods. If you need to fit unbalanced data with a classifier that does not support this option, you can use sampling with replacement to enlarge the smaller class to match the larger one.
Here is an adapted version of the sklearn SVM example demonstrating the sample_weight approach:
import numpy as np
from sklearn import svm
np.random.seed(0)
X = np.r_[2*np.random.randn(20, 2) - [2, 2], 2*np.random.randn(200, 2) + [2, 2]]
Y = [0] * 20 + [1] * 200
# per-sample weights: each class gets the same total weight
wt = [1/20.]*20 + [1/200.]*200
# fit the model
clf = svm.SVC(kernel='linear')
clf.fit(X, Y, sample_weight=wt)
This question about unbalanced classification using RandomForestClassifier has some additional details.
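The sampling-with-replacement fallback mentioned above could look like this with sklearn.utils.resample (a sketch on the same toy data):

```python
import numpy as np
from sklearn.utils import resample

rng = np.random.RandomState(0)
X = np.r_[2*rng.randn(20, 2) - [2, 2], 2*rng.randn(200, 2) + [2, 2]]
y = np.array([0]*20 + [1]*200)

# Upsample the minority class (label 0) to match the majority class size
X_min, y_min = X[y == 0], y[y == 0]
X_up, y_up = resample(X_min, y_min, replace=True,
                      n_samples=(y == 1).sum(), random_state=0)
X_bal = np.vstack([X[y == 1], X_up])
y_bal = np.concatenate([y[y == 1], y_up])
print(np.bincount(y_bal))   # [200 200]
```

The balanced `X_bal`, `y_bal` can then be passed to any classifier that lacks a `sample_weight` option.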
|
47,941
|
Classifier for unbalanced dataset?
|
Linear SVM can handle unbalanced data sets just fine by using class-weights on the misclassification penalty. This functionality is available in any decent SVM implementation.
The objective function for class-weighted SVM is as follows:
$$\min_{\xi,\mathbf{w}} \frac{1}{2}\|\mathbf{w}\|^2 + C_{\mathcal{P}}\sum_{i\in\mathcal{P}} \xi_i + C_\mathcal{N} \sum_{i\in\mathcal{N}} \xi_i, $$
where the minority class uses a higher misclassification penalty. A common heuristic is as follows:
$$C_\mathcal{P} \times |\mathcal{P}| = C_\mathcal{N} \times |\mathcal{N}|,$$
with $|\mathcal{P}|$ and $|\mathcal{N}|$ the number of positive and negative training samples, respectively.
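In scikit-learn, for example, this heuristic corresponds to `class_weight="balanced"`, which sets each class weight to $n_{\text{samples}} / (n_{\text{classes}} \cdot n_k)$, so that weight times class size is constant across classes (a sketch on toy data):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = np.r_[2*rng.randn(20, 2) - [2, 2], 2*rng.randn(200, 2) + [2, 2]]
y = np.array([0]*20 + [1]*200)

# class_weight='balanced' scales C by n_samples / (n_classes * n_k) per class,
# which matches the C_P * |P| = C_N * |N| heuristic above
clf = SVC(kernel="linear", class_weight="balanced").fit(X, y)

# The same weights written out explicitly
clf_explicit = SVC(kernel="linear",
                   class_weight={0: 220/(2*20), 1: 220/(2*200)}).fit(X, y)
print(clf.score(X, y), clf_explicit.score(X, y))
```

Without the class weights, the minority class would contribute so little to the hinge loss that the boundary drifts toward it.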
|
47,942
|
Matrix multiplication to find correlation matrix
|
I think the author is simply hand-wavy there. I believe it is assumed that the columns of $A$ have norm 1 and mean 0.
$A^TA$ is a Gram matrix. Given that you are using random variables to construct $A$, $A^TA$ is approximately proportional to the covariance matrix (up to scaling by $n$, the number of observations). The correlation matrix is simply the scaled version of the covariance matrix. Clearly, if the random variables in the columns of $A$ are already normalized to unit norm, then $A^TA$ needs no further normalization and is immediately a correlation matrix.
For example using MATLAB:
% Set the seed and generate a random matrix
rng(123);
A = randn(10,3);
A'*A
% ans =
% 4.867589606955550 -3.004809945345502 -1.615373090426428
% -3.004809945345502 5.356131932084330 -0.457643222208441
% -1.615373090426428 -0.457643222208441 5.574303027408192
% Normalize now all the variables to have unit norm and mean zero
A = A - repmat(mean(A),10,1);
A(:,1) = A(:,1)./norm(A(:,1));
A(:,2) = A(:,2)./norm(A(:,2));
A(:,3) = A(:,3)./norm(A(:,3));
A'*A
% ans =
% 1.000000000000000 -0.568373671796724 -0.336996715690262
% -0.568373671796724 1.000000000000000 -0.052272607974969
% -0.336996715690262 -0.052272607974969 1.000000000000000
If we wanted to make the columns of $A$ orthogonal too, we would orthonormalize $A$. In that case we would do a Gram-Schmidt process on $A$ (a process related to a lot of wonderful things, e.g. Givens rotations, Householder transformations, etc.).
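For readers without MATLAB, the same check can be sketched in Python with NumPy (synthetic data; after centering and unit-norm scaling, $A^TA$ matches the correlation matrix np.corrcoef computes):

```python
import numpy as np

rng = np.random.RandomState(123)
A0 = rng.randn(10, 3)                # raw random matrix

A = A0 - A0.mean(axis=0)             # mean-zero columns
A = A / np.linalg.norm(A, axis=0)    # unit-norm columns

C = A.T @ A                          # now exactly a correlation matrix
```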
|
47,943
|
Distances in PCA space [closed]
|
Bit late, but here we go:
The transformation spectra -> PC scores is typically set up to be a pure rotation. Thus Euclidean distance in PC score space equals Euclidean distance in original space as long as no PCs are discarded. Thus, neighbours stay neighbours.
For models that keep only some of the PCs, you can maybe construct a (squared) distance that distinguishes distance modeled from distance orthogonal to the model. This is e.g. done in SIMCA.
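The pure-rotation claim is easy to verify numerically; a small sketch with synthetic data (all names and sizes here are illustrative):

```python
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.RandomState(42)
X = rng.randn(50, 5)
Xc = X - X.mean(axis=0)            # center the data

# PCA scores via SVD; keeping all 5 PCs makes the transform a pure rotation
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T

# pairwise Euclidean distances are unchanged, so neighbours stay neighbours
```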
|
47,944
|
Distances in PCA space [closed]
|
Sounds like you want to know how to get from the PCA projection back to the original data for 1), and 2) what to do with nearest neighbors? Look at the PCA score coefficient matrix, which allows a back-projection. Regarding nearest neighbors after PCA, the focus commonly involves use of a doubly-centered Gram matrix ($G=XX^T$). Hence, you probably need to work with the Gram matrix, which is used heavily in distance metric (non-linear manifold) learning.
|
47,945
|
Distances in PCA space [closed]
|
I'm working with PCA coefficients now and probably you're done with your project by now but I think this might be helpful to others.
In PCA the higher components capture less of the variance, so discarding them does not lose as much information. The distances between points are not maintained exactly, but the ordering of the distances tends to be preserved, since on average each dimension you truncate contributes less to the distance than any dimension kept before it.
|
47,946
|
ARIMA vs. Random Forest
|
It has been awhile, but this is still getting upvotes. So ...
In the event, I did use a Random Forest, and it worked really well. A boosted tree did a bit better than that, even. Also, since then I know of several other similar projects that took the same route with success and very little fuss. While ARIMA is surely a powerful method with a distinguished history, it seems in today's world the ensemble methods will usually get you where you want to go much quicker, though perhaps with the disadvantage of not shedding as much light on what is really going on under the covers.
One thing to watch out for: don't just split your training/test set randomly and test on it; you have to chunk it out so that the test points are not right up against the training points (on the time axis). This is necessary for any time series with slowly changing data, otherwise the forest always gets to see training points almost identical to the test points and will thus fit them almost exactly, giving you highly misleading performance that will not reflect real-life performance on unseen data. I made that mistake in the first round and was euphoric about my model explaining 90%-plus of the variation - for about a day, until I realized what was going on. 60% to 80% was what it really did.
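One way to "chunk it out" is a time-ordered split with a buffer between training and test; a sketch using scikit-learn's TimeSeriesSplit (the gap size and fold count are illustrative):

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

t = np.arange(100)                         # 100 time-indexed observations
tscv = TimeSeriesSplit(n_splits=4, gap=5)  # leave a 5-step buffer after training

for train_idx, test_idx in tscv.split(t):
    # every test fold starts strictly after the training fold plus the gap
    assert train_idx.max() + 5 < test_idx.min()
```

Unlike a random split, each test fold sits strictly later in time than its training fold, with the gap keeping test points away from the adjacent training points.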
|
47,947
|
ARIMA vs. Random Forest
|
I don't see the argument that Random Forest is less work than ARIMA; I would argue RF is more work.
Here is the process for ARIMA:
1. Detrend / transform your data
2. Test for stationarity
3. Run autocorrelation plots
4. Set your parameters for your model
5. Run a grid search if you want
6. Look at RMSE
Here is the process for Random Forest:
1. You still have to transform your data
2. You still have to test for stationarity
3. You have to think about creating a bunch of useful features like season, time of day, t-1, t-7, t-14, split weeks, holidays, features that go into all machine learning models
4. Set up cross validation (train, test)
5. Optimize with gridsearch or kfold
6. Pick parameters, then run a model
7. Look at results
|
47,948
|
What does id (cluster) mean in gee?
|
I never used this kind of models but a quick Google search reveals that
Generalized Estimating Equations (GEE) (Liang and Zeger 1986) are a
general method for analyzing data collected in clusters where 1)
observations within a cluster may be correlated, 2) observations in
separate clusters are independent, 3) a monotone transformation of the
expectation is linearly related to the explanatory variables and 4)
the variance is a function of the expectation. It is essential to note
that the expectation and the variance referred to in points 3) and 4)
are conditional given cluster-level or individual-level covariates.
(source: Halekoh and Højsgaard, 2006 in JSS paper on geepack library)
So this kind of model seems to be designed especially for clustered data, and if your data are not clustered then this does not seem to be the right model for you. If you need a model that accounts for autocorrelated errors you may try GLS.
As for what clustered data is - we say that data is clustered if there is some grouped structure, e.g. students are grouped in schools, patients in hospitals, etc. If you want to account for group effects, then you use models that let you define such structure (e.g. linear mixed models). The structures can be hierarchical: students grouped in classes, classes in schools, schools in districts, etc., or even crossed: students grouped in schools and at the same time grouped by the neighborhood they live in (which is possibly different from the school's location).
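The within-cluster correlation that motivates GEE can be illustrated with a small synthetic example (all numbers are arbitrary): rows sharing a cluster id receive a common random effect, which makes them correlated while separate clusters stay independent.

```python
import numpy as np

rng = np.random.RandomState(0)
n_clusters, per = 200, 2
effects = rng.randn(n_clusters)                # one shared effect per cluster
y = effects.repeat(per) + 0.5 * rng.randn(n_clusters * per)

pairs = y.reshape(n_clusters, per)             # each row = one cluster
within = np.corrcoef(pairs[:, 0], pairs[:, 1])[0, 1]
# expected intraclass correlation: 1 / (1 + 0.5**2) = 0.8
```

This is exactly the structure the id/cluster argument describes to gee: which rows share an effect and hence may not be treated as independent.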
|
47,949
|
Is there an estimator for the symmetry of a bimodal distribution?
|
By definition, a symmetric random variable $X$ is one for which there is a constant $\mu$ for which $X-\mu$ and $\mu-X$ are identically distributed. In terms of the distribution function $F$ this is equivalent to
$$\eqalign{
F(\mu+x) &= \Pr(X \le \mu+x) = \Pr(X-\mu \le x) \\
&= \Pr(\mu-X \le x) \\
&= \Pr(X-\mu \ge -x) = \Pr(X \ge \mu - x)\\
&= 1 - \Pr(X \lt \mu- x) \\
&= 1 - F(\mu - x) + \Pr(X = \mu-x)
}$$
When $F$ is continuous this simplifies to
$$F(\mu + x) + F(\mu-x) = 1$$
for all $x$ and thence (via differentiation), when $F$ is absolutely continuous with density function $f$,
$$f(\mu + x) = f(\mu - x)$$
for all $x$. (It's not hard to see that any of these equations uniquely determine $\mu$.)
This provides a general, flexible procedure to test for symmetry, based on any method of comparing two distributions. (Many such methods exist, ranging from comparing basic properties like moments through relative entropy, KL distances, and so on.) Specifically, take any non-negative function $\delta$ where
$$\delta(F,G)$$
is intended to measure the "distance" or "dissimilarity" between distributions $F$ and $G$. All we ask of $\delta$ (besides being nonnegative) is that $\delta(F,G) = 0$ if and only if $F=G$.
For any constant $\mu$ define $F_\mu(x) = F(x-\mu)$ and $\check F_\mu(x) = F(\mu-x)$. Then merely take
$$\inf_{\mu}\, (\delta(F_\mu, \check{F}_\mu))$$
as the measure of asymmetry. This measures how close you can make $X-\mu$ and $\mu-X$ appear to be. It will be nonnegative and equal to zero only when $F$ is symmetric. Choose $\delta$ to emphasize those aspects of "closeness" important in your application, such as asymptotic tail behavior or balancing an odd moment.
As a simple example, intended to be applied to bimodal absolutely continuous distributions, let
$$\delta(F,G) = \int (f(x) - g(x))^2 dx.$$
This is the $L^2$ norm of their density functions, depending on the total area between their graphs. The illustration shows the density $f$ for a bimodal distribution at the left, followed by three graphs depicting the region between $f_\mu$ and $\check{f}_\mu$ for values of $\mu$ around the optimum $\mu=1$, where that region is the smallest in the $L^2$ sense:
This shows how the whole process lends itself to exploratory (visual) evaluation: simply make such a plot to superimpose $f_\mu$ and $\check{f}_\mu$ (or the CDFs $F_\mu$ and $\check{F}_\mu$) and vary $\mu$ until the graphs look as "alike" as possible. The visual deviations at this optimal point will not only indicate asymmetry, but they will also show the form of the asymmetry.
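A numerical sketch of this $L^2$ measure (the mixture, grid, and search range below are illustrative): for an equal-weight normal mixture symmetric about $\mu = 1$, the minimized distance is essentially zero and the minimizer recovers $\mu = 1$.

```python
import numpy as np
from scipy.stats import norm

def f(x):
    # equal-weight normal mixture with modes near 0 and 2, symmetric about 1
    return 0.5 * norm.pdf(x, 0, 0.4) + 0.5 * norm.pdf(x, 2, 0.4)

grid = np.linspace(-6, 8, 4001)
dx = grid[1] - grid[0]

def asymmetry(mu):
    # delta(F_mu, F_mu-check): integral of (f(mu + x) - f(mu - x))^2 dx
    return np.sum((f(mu + grid) - f(mu - grid)) ** 2) * dx

# minimize over mu on a grid (deterministic; an optimizer would also work)
mus = np.linspace(-1, 3, 401)
best = mus[np.argmin([asymmetry(m) for m in mus])]
```

For an asymmetric density the minimized value stays strictly positive, and its size quantifies how asymmetric the distribution is in the $L^2$ sense.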
|
47,950
|
Is there an estimator for the symmetry of a bimodal distribution?
|
While I don't think there is a single measure of symmetry for a bimodal distribution in general, for the special case of a mixture of two normal distributions it may be possible to use one or several bimodality measures and statistical tests. Some mixture-modeling software (especially some R packages) might have some of those measures and tests implemented, so it might be possible to assess the symmetry of a bimodal distribution analytically.
|
47,951
|
Unequal sample size one way ANOVA [duplicate]
|
You have a total N = 148, distributed into 4 groups. If you had 37 in each group instead, you would have greater statistical power. Otherwise, a one-way ANOVA is just as valid here as anywhere else (given that the normal assumptions are met). (To understand this better, it may help to read my answer here: How should one interpret the comparison of means from different sample sizes?) So to answer 1. explicitly, yes, you can use a one-way ANOVA when the sample sizes are extremely unequal.
However, your description in 2. seems odd to me, so let me add a few notes:
If the groups (A through D) were formed by categorizing BMI (a continuous variable), you would be better off using regression with BMI as your predictor; categorizing continuous variables is not a good thing to do.
It isn't clear what you mean when you say that A-B and A-C were significant, but A-D wasn't. An ANOVA doesn't tell you that. An ANOVA only tells you if there is a difference somewhere amongst your groups. Did you run some post-hoc test to get those results?
I don't see how you could have run a paired t-test to compare A and D when they do not have the same ns. Did you mean an unpaired t-test? Under the assumption that you used some proper test for post-hoc comparisons with the ANOVA, that was probably the appropriate option as a t-test would not take into account that you have multiple comparisons, for example.
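For reference, standard one-way ANOVA routines accept unequal group sizes directly; a sketch with synthetic data (the group sizes and means are illustrative, not the asker's):

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.RandomState(1)
a = rng.normal(0.0, 1.0, 80)   # four groups with very unequal n
b = rng.normal(1.0, 1.0, 30)
c = rng.normal(1.0, 1.0, 23)
d = rng.normal(0.2, 1.0, 15)

F, p = f_oneway(a, b, c, d)    # omnibus test: is there any difference?
```

Note the omnibus F only says that some difference exists; pairwise conclusions such as A-B vs A-D need a post-hoc procedure, as discussed above.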
|
47,952
|
Degeneracy paradox
|
You would normally bet on the mode of the outcome distribution, not on the expected value. The mode corresponding to 98 flips is 0, so you would bet on 0.
The mode corresponding to a very large number $N$ of flips will be approximately $N \cdot 0.01$ (rounding will play a very small role for very large $N$), so you would bet on that.
Edit: as pointed out by @CagdasOzgenc, what to bet on depends on the loss function. Expected value works for quadratic loss, while mode works for the principle "if you do not guess right, it does not matter how close your guess was".
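The two modes can be checked directly; for a Binomial$(n, p)$ the mode is $\lfloor (n+1)p \rfloor$:

```python
import numpy as np
from scipy.stats import binom

k = np.arange(0, 99)
mode_small = k[np.argmax(binom.pmf(k, 98, 0.01))]     # floor(99 * 0.01) = 0

k = np.arange(0, 300)
mode_large = k[np.argmax(binom.pmf(k, 10000, 0.01))]  # floor(10001 * 0.01) = 100
```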
|
47,953
|
Density estimation and histograms
|
As @tristan comments, $m$ is a counter integer, while $n$ is the total number of data points in the sample, and $h$ is the histogram bin width. The formula is correct.
It may be easier to understand if you consider the case where you have $M$ bins and the same number of data points $\frac{n}{M}$ in each bin. Then your histogram height will be the same for each bin, $\hat{f}=\frac{1}{Mh}$. So you have $M$ bins, each of width $h$ and height $\hat{f}=\frac{1}{Mh}$, for a total area of 1. As a density should be.
In fact, if you count the data points in each bin, you will find that your histogram always has a total area of 1. Again: this is just what a density should be.
And yes, $\hat{f}$ is an estimator. It is an estimate of the density, in the space of step functions. You can approximate most "normal" functions using step functions (in the sense that the integral over the absolute difference between the step function and the function to be approximated goes to zero as $h\to 0$), so step functions are a logical simple approximation.
In fact, histograms can be seen as related to kernel density estimators, with "kernels" that don't only depend on $\frac{x-x_i}{h}$, but additionally on $x$: i.e., "counting kernels" that count how many $x_i$ fall into the interval (bin) containing $x$. This is a somewhat contrived way of looking at histograms, but I actually find it a bit enlightening.
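The unit-area claim is easy to verify; a quick sketch (sample size, bin width, and range are arbitrary):

```python
import numpy as np

rng = np.random.RandomState(0)
x = rng.randn(500)                  # n = 500 data points
h = 0.25                            # bin width
bins = np.arange(-6.0, 6.0 + h, h)  # wide enough to cover all samples

counts, _ = np.histogram(x, bins=bins)
heights = counts / (len(x) * h)     # hat f = nu_m / (n h) per bin
area = np.sum(heights * h)          # total area of the histogram
```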
|
47,954
|
LSTM forgetting dependencies
|
In a typical LSTM block, the forget gate computes its value from its inputs (from the layer below) and the weights associated with each input. The weights are trained, usually by gradient descent, using gradients computed by backpropagation.
You can think of the forget gate as a simple logistic regression classifier trained to classify inputs into two classes (forget / don't forget). In the LSTM case, however, one doesn't explicitly assign classes to the inputs, because the output error is backpropagated from the units above.
Thus, one can (informally) say that the forget gate learns the optimal time to forget by adjusting the weights on its inputs in a way that minimizes the overall network output error.
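As an illustration, here is the standard forget-gate equation $f_t = \sigma(W_f x_t + U_f h_{t-1} + b_f)$ in NumPy; the weights below are random placeholders standing in for trained values, and the sizes are arbitrary:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
n_in, n_hid = 3, 2
W_f = rng.normal(size=(n_hid, n_in))   # normally learned, here random
U_f = rng.normal(size=(n_hid, n_hid))
b_f = np.zeros(n_hid)

x_t = rng.normal(size=n_in)      # input from the layer below
h_prev = np.zeros(n_hid)         # previous hidden state
c_prev = np.ones(n_hid)          # previous cell state

# Forget gate: a per-unit logistic "classifier" on the inputs.
f_t = sigmoid(W_f @ x_t + U_f @ h_prev + b_f)

# The gate scales the old cell state: values near 0 forget, near 1 keep.
c_kept = f_t * c_prev
print(f_t)   # each entry lies strictly in (0, 1)
```

In training, the gradient of the output error flows back through `f_t` into `W_f`, `U_f`, and `b_f`, which is how the gate "learns when to forget".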
|
47,955
|
LSTM forgetting dependencies
|
The forget gate simply applies to the stored cell value. If the forget gate's value is close to 1, the stored value is taken into account; otherwise, the stored value is ignored. Training just has to find a value for the forget gate at which the error is minimized.
|
47,956
|
What are the best empirical studies comparing causal inference with experimental, quasi-experimental, and non-experimental techniques?
|
The type of study you are referring to is called a within-study comparison. An early example that produced a lot of discussion is Dehejia and Wahba (1999; JASA) using Lalonde's (1986) NSW data in which they compared the results based on PSA to the randomized experimental benchmark. The Lalonde data set is now included in PSA packages in R such as Matching and twang, for example. There was an ongoing workshop at Northwestern (not sure if they still do it) that has an archived website with a reference list you will find useful (link).
One interesting example is the 2008 JASA paper by Shadish, Clark, and Steiner in which they randomized participants to be in either an observational study or a randomized experiment and then used the results from the randomized experiment as a benchmark, as you say. The more typical design is three arm (randomized treatment gp, randomized comparison group, observational comparison group). Shadish, Clark, and Steiner's design was four arm (randomized treatment gp, randomized comparison group, observational treatment gp, observational comparison group).
|
47,957
|
What are the best empirical studies comparing causal inference with experimental, quasi-experimental, and non-experimental techniques?
|
In medicine, the most recent and comprehensive work I'm aware of has been done by OMOP (the Observational Medical Outcomes Partnership). You'll find a lot of relevant research on their publications page, and I think the review paper, 'A systematic statistical approach to evaluating evidence from observational studies', gives a good overview of the project and its findings.
|
47,958
|
What are the best empirical studies comparing causal inference with experimental, quasi-experimental, and non-experimental techniques?
|
Angus Deaton, the latest Economics Nobel Laureate, is interviewed in the link below regarding his thoughts on RCTs as the gold standard. He's quite refreshingly skeptical, pointing to, among other things, the typically small sample sizes in RCTs vs the nationally projectable estimates available from observational studies, and concluding that, "I don’t see a difference in terms of quality of evidence or usefulness. There are bad studies of all sorts."
https://medium.com/@timothyogden/experimental-conversations-angus-deaton-b2f768dffd57#.t41xnnnd5
|
47,959
|
What are the best empirical studies comparing causal inference with experimental, quasi-experimental, and non-experimental techniques?
|
Have a look at the ACIC causal inference competitions; they have been going on for a couple of years by now. See for example https://statmodeling.stat.columbia.edu/2022/02/16/welcome-to-the-american-causal-inference-conference-2022-data-challenge/ and the references therein.
|
47,960
|
Distribution of Ratio of 2 Chi-squared
|
With finite d.f.
the ratio of two independent chi-squared variates has a beta-prime distribution (also sometimes called a 'beta distribution of the second kind').
if you divide each of the chi-square variates by its df the ratio has an F-distribution.
Asymptotic arguments:
if $j\to\infty$, you can apply Slutsky's theorem to argue that the F-ratio should go to a $\chi^2_k/k$
if $k$ also $\to\infty$ you can in turn (with appropriate standardization) make an argument that invokes CLT
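Both finite-d.f. facts are easy to check by simulation. A NumPy sketch with arbitrary degrees of freedom $j=5$, $k=8$ (the beta-prime relation is checked via the equivalent statement that $X/(X+Y)$ is $\text{Beta}(j/2,\,k/2)$):

```python
import numpy as np

rng = np.random.default_rng(0)
j, k, n = 5, 8, 200_000

X = rng.chisquare(j, size=n)
Y = rng.chisquare(k, size=n)

# (X/j) / (Y/k) follows an F(j, k) distribution,
# whose mean is k / (k - 2) for k > 2.
F = (X / j) / (Y / k)
print(F.mean())   # close to 8 / 6 ≈ 1.333

# Equivalently, X / (X + Y) follows a Beta(j/2, k/2) distribution,
# whose mean is j / (j + k).
B = X / (X + Y)
print(B.mean())   # close to 5 / 13 ≈ 0.385
```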
|
47,961
|
Fixed Regressor Conspiracy and Connection to Exchangeability
|
A regression model gives predictions of the response conditional on predictor values; so there's no problem in applying a model fitted to one set of predictor values fixed by design to another set of predictor values, even if the latter are randomly sampled from a population. With an experimental design matrix $X$, the expectation & variance of the predicted response $\hat y$ for a (new) predictor vector $x$ are given by
$$\operatorname{E}[\hat y \,|\, x] = x^\mathrm{T}\beta$$
$$\operatorname{Var}[\hat y\,|\,x]=\sigma^2\left(1+x^\mathrm{T}(X^\mathrm{T}X)^{-1}x\right)$$
where $\beta$ is the coefficient vector & $\sigma^2$ is the error variance—so the particular predictor values used for the fit don't affect the expectation of predictions, but do affect the variation in their precision throughout predictor space. Note that any aggregate fit metrics, say root mean square error of predictions, don't carry over from the experiment to the new sample.
The above discussion assumes the model is right: in practice there will be extra-statistical considerations when applying it. You need to think about e.g. variation of effects that weren't investigated in the original experiment, the reliability of extrapolation into new regions of predictor space, selection bias in the population, & whether experimental manipulation is comparable to a natural cause. An engineer might model resistivity as a linear function of temperature from experimental data & be confident in applying the model to a particular collection of resistors used in a circuit board. The medical researcher in your example might assert that the medicine reduces blood cholesterol level, & confidently predict the results of further experiments; but would be unlikely to claim that, in a random sample from, say, all hospital admissions, those patients taking the medicine would have lower cholesterol levels than those who weren't.
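The two formulas can be evaluated directly. A NumPy sketch with a synthetic design (here $\beta$ and $\sigma$ are treated as known, as in the formulas, and the design points are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Experimental design: n runs, p predictors (intercept included).
n, p, sigma = 30, 3, 0.5
X = np.column_stack([np.ones(n), rng.uniform(-1, 1, size=(n, p - 1))])
beta = np.array([1.0, 2.0, -0.5])

XtX_inv = np.linalg.inv(X.T @ X)

def pred_moments(x):
    """Expectation and variance of a new prediction at predictor vector x."""
    mean = x @ beta
    var = sigma**2 * (1.0 + x @ XtX_inv @ x)
    return mean, var

# The variance grows as x moves away from the bulk of the design points.
m0, v0 = pred_moments(np.array([1.0, 0.0, 0.0]))   # centre of the design
m1, v1 = pred_moments(np.array([1.0, 3.0, 3.0]))   # far outside it
print(v0, v1)   # v1 > v0
```

This makes the point above concrete: the design $X$ does not shift the expected prediction, but it governs how quickly precision degrades away from the experimental region.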
|
47,962
|
Can I use likelihood-ratio test to compare two samples drawn from power-law distributions?
|
What you have certainly works. Another option, which would require you to run only the non-pooled model (where you estimate both $\hat\alpha_1$ and $\hat\alpha_2$), is the Wald test with the linear hypothesis
$$
H_0: \alpha_1 - \alpha_2 = 0
$$
$$
H_1: \alpha_1 - \alpha_2 \neq 0
$$
If your sample size is large, then this method may be more efficient from a computational standpoint (since you only have to fit one model instead of two). Other than that, the likelihood ratio and Wald tests are asymptotically equivalent.
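A sketch of the Wald version in Python, using the continuous-case MLE $\hat\alpha = 1 + n/\sum_i \ln(x_i/x_\min)$ and its asymptotic standard error $(\hat\alpha-1)/\sqrt{n}$ (as in Clauset et al. 2009); the Pareto samples and parameter values below are synthetic illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)

def alpha_hat(x, x_min):
    """Continuous power-law MLE and its asymptotic standard error."""
    n = len(x)
    a = 1.0 + n / np.sum(np.log(x / x_min))
    se = (a - 1.0) / np.sqrt(n)
    return a, se

# Two synthetic samples drawn from the same power law (true alpha = 2.5).
x_min, n = 1.0, 50_000
s1 = x_min * (1.0 + rng.pareto(1.5, size=n))
s2 = x_min * (1.0 + rng.pareto(1.5, size=n))

a1, se1 = alpha_hat(s1, x_min)
a2, se2 = alpha_hat(s2, x_min)

# Wald statistic for H0: alpha1 - alpha2 = 0; approximately N(0, 1) under H0.
z = (a1 - a2) / np.sqrt(se1**2 + se2**2)
print(a1, a2, z)   # alphas near 2.5, |z| typically small under H0
```

For a discrete power law, swap in the discrete MLE; the Wald construction is unchanged.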
|
47,963
|
Can I use likelihood-ratio test to compare two samples drawn from power-law distributions?
|
I can't do that by means of Kolmogorov-Smirnov test because my data are discrete,
Well, actually you could use a Kolmogorov-Smirnov test on discrete data as long as either:
(i) you don't use the distribution of the test statistic that assumes the data are continuous. You could, for example, run a permutation or randomization test on the data you have, and you could use the K-S statistic for that if you wanted. This would deal with the impact of discreteness on the distribution of the test statistic.
(ii) you are prepared to deal with the consequences of ignoring the discreteness (lower-than-nominal significance level and corresponding reduction in power) and using the tables anyway. With a sample size of a million, that may not actually be such a problem; you can always use simulation to get a sense of where your actual significance lies. It largely depends on "how discrete" the discrete distribution is.
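Option (i) can be sketched in a few lines. This is an illustrative NumPy implementation of a permutation test using the K-S statistic; the Poisson samples are synthetic stand-ins for discrete data:

```python
import numpy as np

rng = np.random.default_rng(0)

def ks_stat(a, b):
    """Two-sample KS statistic; valid for discrete data too."""
    values = np.concatenate([a, b])
    cdf_a = np.searchsorted(np.sort(a), values, side="right") / len(a)
    cdf_b = np.searchsorted(np.sort(b), values, side="right") / len(b)
    return np.max(np.abs(cdf_a - cdf_b))

def ks_permutation_pvalue(a, b, n_perm=2000):
    """Permutation p-value: relabel the pooled sample and recompute."""
    observed = ks_stat(a, b)
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        if ks_stat(perm[:len(a)], perm[len(a):]) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)

# Discrete samples from the same distribution: no evidence of a difference.
a = rng.poisson(3.0, size=300)
b = rng.poisson(3.0, size=300)
print(ks_permutation_pvalue(a, b, n_perm=500))
```

Because the null distribution of the statistic is generated from the data themselves, the discreteness is handled automatically (with very large samples, subsample or use fewer permutations for speed).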
That said, a likelihood ratio test makes perfect sense, too (but how do you know for sure you have a power-law?).
You would indeed proceed exactly as you've said. In small samples, you might try to work out the exact small sample distribution of some simple transformation of the LRT, but with a huge sample there's no reason to bother with all that.
(If your distribution were to have more parameters than the one you mention, under the formulation you give, any additional parameters are assumed constant across samples.)
I suggest taking a look at the paper by Clauset, Shalizi and Newman (2009) [1], which to my recollection covers both continuous and discrete power laws and discusses both Kolmogorov-Smirnov and likelihood ratio tests.
[1] Aaron Clauset, Cosma Rohilla Shalizi, M. E. J. Newman (2009),
"Power-law distributions in empirical data,"
SIAM Review 51, 661-703
(also arXiv:0706.1062v2)
|
47,964
|
Mann-Whitney U test and paired data
|
(I'm not sure I really follow your reasoning.) The Mann-Whitney U-test can be used with paired data. It will simply be less powerful. When you ignore the pairing, you are throwing a lot of information away.
I don't really understand this question.
The meaning of p-values here is the same as the meaning of p-values anywhere in frequentist statistics. That is, it is the probability of finding data as far or further from the null value if the null hypothesis is true. It may help you to read this CV thread: What is the meaning of p values and t values in statistical tests?
|
47,965
|
Mann-Whitney U test and paired data
|
I mean if it violate some assumptions of the test
Certainly it violates assumptions, because any machine that crops up in both samples will have scores for the two measures that are dependent (due to the impact of the unobserved $s_i$), when there is an explicit assumption of independence.
This will impact the behavior of the test.
If so, which is the advantage of using Wilcoxon signed-rank test instead of Mann-Whitney U test?
If dependence is substantial, the true significance level of the rank-sum test may be severely impacted (simulation in the case of paired data indicates the effect can be quite strong). By contrast, the signed rank test is designed for this situation, and it has better power in the presence of dependence (I'd guess largely due to the fact that its significance level isn't pushed down). If the dependence is low (e.g. overlapping samples with very small overlap), it won't matter so much.
If you do have overlapping samples and you know which ones are paired, you could separate into "paired" and "independent" subsets, apply signed rank to the paired and rank-sum to the unpaired and combine the two p-values (say via Fisher's method).
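Fisher's method for the two p-values is short enough to sketch directly: the combined statistic $-2(\ln p_1 + \ln p_2)$ is chi-squared with 4 d.f. under the null, and for 4 d.f. the survival function has a closed form, so only the standard library is needed. The p-values 0.04 and 0.10 below are made-up illustrations:

```python
import math

def fisher_combine(p1, p2):
    """Fisher's method for two independent p-values.

    X = -2 (ln p1 + ln p2) is chi-squared with 4 df under the null;
    for 4 df the survival function is exp(-x/2) * (1 + x/2).
    """
    x = -2.0 * (math.log(p1) + math.log(p2))
    return math.exp(-x / 2.0) * (1.0 + x / 2.0)

# e.g. a signed-rank p-value from the paired subset and a
# rank-sum p-value from the independent subset:
print(fisher_combine(0.04, 0.10))  # ≈ 0.026
```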
If you have the option, take advantage of the pairing, and use all-paired data.
What exactly does the p-value mean in the case of the Wilcoxon signed-rank test?
The same as what it means for any other hypothesis test. See the second sentence of the second paragraph here
[the] p-value is the probability of obtaining the observed sample results, or a "more extreme" result, when assuming the null hypothesis is actually true
|
47,966
|
Are the order statistics minimal sufficient for a location-scale family?
|
This question is Example 6.10, page 36, in Lehmann and Casella Theory of Point Estimation. If the density $f$ is unknown or simply outside exponential families, like the Cauchy distribution, the order statistics $$T(X) = (X_{(1)}, X_{(2)}, X_{(3)},..., X_{(n)})$$ is minimal sufficient.
The result is based on Theorem 6.12, page 37, that, for a finite family of distributions $\mathcal{P}=\{p_0,\ldots,p_k\}$, and a sample $X$, the statistic $$T(X) = (p_1(X)/p_0(X),\ldots,p_k(X)/p_0(X))$$ is minimal sufficient. The result follows from this property by considering $n$ different values of the Cauchy parameters and by noticing that if $T$ is minimal sufficient for $\mathcal{P}_0$ and sufficient for $\mathcal{P}_1\supset\mathcal{P}_0$, then it is minimal sufficient for $\mathcal{P}_1$.
|
47,967
|
What is the proper name of a model that takes as input the output of another model?
|
The process is known as Cascaded classification/regression or Multi-stage classification/regression. It is a type of ensemble learning with some differences. You can find more in Wikipedia.
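A minimal sketch of the cascaded idea in Python (the data-generating process and the least-squares models here are illustrative assumptions, not from the original answer): the second-stage model takes the first-stage model's output as one of its inputs.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
X = rng.normal(size=(n, 2))
z = X @ np.array([1.0, -0.5]) + rng.normal(scale=0.1, size=n)  # stage-1 target
y = 2.0 * z + rng.normal(scale=0.1, size=n)                    # stage-2 target

# Stage 1 (model A): least-squares fit, producing predictions z_hat.
A = np.column_stack([np.ones(n), X])
z_hat = A @ np.linalg.lstsq(A, z, rcond=None)[0]

# Stage 2 (model B): its input is the *output* of model A.
B = np.column_stack([np.ones(n), z_hat])
beta = np.linalg.lstsq(B, y, rcond=None)[0]
# beta[1] recovers the stage-2 slope (close to 2.0 here)
```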
|
47,968
|
What is the proper name of a model that takes as input the output of another model?
|
I am not sure, if there exists at all a "proper name" for models that you describe, but I would call such models chain models, similarly to the chain procedures, introduced in the following paper: http://www.multxpert.com/doc/md2011.pdf. This term IMHO better reflects the nature of this type of models and, at the same time, prevents potential confusion with hierarchical and multi-level ones.
|
47,969
|
Are prediction and distribution-fitting ever not the same thing?
|
one has also (perhaps implicitly) produced a good estimate of its marginal or conditional distribution.
Point predictions don't necessarily do this.
one must necessarily be able to generate good predictions.
Possibly -- depending on how we define "good"
I'm deliberately leaving the prediction and distribution-fitting methods unspecified.
Okay, then consider prediction using a linear model with a single predictor - one fitted by choosing the slope of the line so as to make the Spearman correlation between residuals and $x$ as close to 0 as possible (if there's an interval at 0, choosing the center of that interval). Rather similar to what was done in this answer to fit a line
(choosing the slope at which the red 'curve' crosses 0, yielding a slope estimate of $3.714$), except we then proceed with a different intercept obtained from the residuals from the one used in that previous answer. Instead consider this:
Given that slope, estimate the intercept from the $y-\hat{\beta}x$ values using a 3-part Hampel redescending M-estimator of location. (We could do the whole line fit via M-estimation, but I wanted to give some idea of the sheer variety of perfectly reasonable approaches to prediction that are available.)
A point prediction at some $x$, say $x_\text{new}$, is then obtained from the fitted value for that $x$.
So doing that on the cars data in R that I fitted at the other link (using the defaults in robustbase::lmrob with psi="hampel"), I obtained an intercept of $-15.79$.
Resulting in this fitted line:
The prediction at $x=21$ is marked in ($62.2$). It appears to be a perfectly reasonable prediction.
We've certainly assumed linearity, but there's no distributional assumption made in obtaining that prediction - the slope was obtained nonparametrically (i.e. with a distribution-free method), while the intercept used M-estimation (and while that grows out of ML estimation, the $\psi$-functions which redescend to 0, such as the Hampel, correspond to no actual distribution).
Clearly point prediction at least needn't involve or relate to a distributional fit, and so the answer to the title question is (demonstrably) "not so".
--
Indeed if we then generated a confidence interval or a prediction interval by bootstrapping, we would have interval prediction without fitting a distribution (unless you call using/re-sampling the ECDF 'fitting', it might well count as estimation depending on what you intend the question to encompass). [However, I think there are also ways to get intervals for some fits generated along similar lines that don't use bootstrapping. For example, we can generate a confidence interval from the slope by inverting the critical values in the Spearman test; at least some kinds of prediction should allow us do something similar for intervals. There are nonparametric tolerance intervals, for example.]
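The fitting recipe above can be sketched in Python on synthetic data (this illustrates the same idea, not the cars data, and the median stands in for the Hampel M-estimator):

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.uniform(0, 10, 60)
y = 3.0 * x + rng.standard_t(df=3, size=60)  # heavy-tailed errors

def spearman(a, b):
    # Spearman correlation = Pearson correlation of the ranks
    ra = np.argsort(np.argsort(a))
    rb = np.argsort(np.argsort(b))
    return np.corrcoef(ra, rb)[0, 1]

# Slope: the value making Spearman corr(residuals, x) closest to zero.
slopes = np.linspace(0.0, 6.0, 1201)
b_hat = slopes[np.argmin([abs(spearman(y - b * x, x)) for b in slopes])]

# Intercept: a robust location estimate of y - b_hat * x.
a_hat = np.median(y - b_hat * x)

# Point prediction at a new x, with no distributional assumption anywhere.
x_new = 21.0
pred = a_hat + b_hat * x_new
```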
|
47,970
|
Some questions about gam
|
Q1
Not really; in R's formula system, the intercept is implied and created when R parses the formula and builds the model matrix. If you want to suppress it, you need to add -1 or + 0 to the formula.
Q2
No; assuming $i$ is a grouping variable, mgcv::gam() will fit a spline that is equivalent to a random intercept in the variable i, i.e. the intercepts are drawn from a mean-zero Gaussian distribution with a single unknown variance to be estimated from the data.
Q3
Yes, you will get a spline for the data where weekend == 0 and a different spline for weekend == 1. You don't have to recode this as 0 or 1; just make sure that weekend is a factor variable. It may help, for example, to have weekend be a factor with levels c("weekday", "weekend"), corresponding to your 0 and 1 respectively, as that will help you recall the coding.
Q4
Yes, boundary knots are placed at the minimum and maximum of the observed data for x, and the remaining knots are spread evenly over the interval of the data. For some spline bases it makes no sense to fiddle with knots, such as the P-spline basis (bs = "ps"), and some bases don't even use knots, like the thin plate regression splines (bs = "tp") that mgcv::gam() defaults to using.
|
47,971
|
Initial value of the conditional variance in the GARCH process
|
I know of at least five ways of initializing the volatility process:
1) Set it equal to $\varepsilon_{t-1}^2$,
2) The sample variance,
3) Unconditional variance of the model ($\alpha_0/(1-\alpha_1 - \alpha_2)$),
4) Allow it to be a parameter to be estimated,
5) Backcasting with an exponential filter.
The topic is discussed in further detail here
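A Python sketch of option 3 (initializing at the model's unconditional variance) for a GARCH(1,1) recursion; the parameter values here are illustrative:

```python
import numpy as np

def garch11_variance(eps, a0, a1, b1):
    """Conditional variances h_t = a0 + a1*eps_{t-1}^2 + b1*h_{t-1},
    started at the unconditional variance a0 / (1 - a1 - b1)."""
    h = np.empty(len(eps))
    h[0] = a0 / (1.0 - a1 - b1)  # option 3 above
    for t in range(1, len(eps)):
        h[t] = a0 + a1 * eps[t - 1] ** 2 + b1 * h[t - 1]
    return h

eps = np.random.default_rng(3).normal(size=1000)
h = garch11_variance(eps, a0=0.1, a1=0.05, b1=0.9)
assert np.isclose(h[0], 2.0)  # 0.1 / (1 - 0.05 - 0.9)
```

Swapping in options 1 or 2 only changes the `h[0]` line (to `eps[0] ** 2` or `np.var(eps)` respectively).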
|
47,972
|
Estimate an overall correlation from the correlations within subgroups
|
You will need 1. through 4. to compute the overall correlation for the two groups combined. The equation for this is: $$r_{xy} = \frac{n_1 sd_{x_1} sd_{y_1} r_{xy_1} + n_1 \delta_{x_1} \delta_{y_1} + n_2 sd_{x_2} sd_{y_2} r_{xy_2} + n_2 \delta_{x_2} \delta_{y_2}}{\sqrt{n_1 (sd_{x_1}^2 + \delta_{x_1}^2) + n_2 (sd_{x_2}^2 + \delta_{x_2}^2)} \sqrt{n_1 (sd_{y_1}^2 + \delta_{y_1}^2) + n_2 (sd_{y_2}^2 + \delta_{y_2}^2)}},$$ where $$\delta_{x_1} = m_{x_1} - m_x$$ and $$\delta_{x_2} = m_{x_2} - m_x$$ are the deviations of the group means for variable $x$ from the overall mean for that variable, which we can compute with $$m_x = \frac{n_1 m_{x_1} + n_2 m_{x_2}}{n_1 + n_2}$$ and $$\delta_{y_1} = m_{y_1} - m_y$$ and $$\delta_{y_2} = m_{y_2} - m_y$$ are the deviations of the group means for variable $y$ from the overall mean for that variable, which we can compute with $$m_y = \frac{n_1 m_{y_1} + n_2 m_{y_2}}{n_1 + n_2}.$$ Note that the equation above assumes that we have computed the variances (and hence, the standard deviations) with $n_1$ and $n_2$ in the denominator (instead of $n_1 - 1$ and $n_2 - 1$).
So, let's actually try this out. Here is an example (R code):
library(MASS)
set.seed(12315)
### data for group 1
n1 <- 10
N1 <- mvrnorm(n1, mu=c(0,0), Sigma=matrix(c(1,.5,.5,1), nrow=2))
r1 <- cor(N1)[1,2]
### data for group 2
n2 <- 20
N2 <- mvrnorm(n2, mu=c(2,2), Sigma=matrix(c(1,.3,.3,1), nrow=2))
r2 <- cor(N2)[1,2]
### correlations within the groups
r1
r2
This yields:
> r1
[1] 0.5853821
> r2
[1] 0.2983734
So, these are the correlations within the two groups.
### group means for each variable
mx1 <- mean(N1[,1])
my1 <- mean(N1[,2])
mx2 <- mean(N2[,1])
my2 <- mean(N2[,2])
### group SDs for each variable
sdx1 <- sd(N1[,1]) * sqrt((n1-1) / n1)
sdy1 <- sd(N1[,2]) * sqrt((n1-1) / n1)
sdx2 <- sd(N2[,1]) * sqrt((n2-1) / n2)
sdy2 <- sd(N2[,2]) * sqrt((n2-1) / n2)
### overall means for variables x and y
mx <- (n1*mx1 + n2*mx2) / (n1 + n2)
my <- (n1*my1 + n2*my2) / (n1 + n2)
### deviations of group means from overall means
dx1 <- mx1 - mx
dy1 <- my1 - my
dx2 <- mx2 - mx
dy2 <- my2 - my
### overall correlation for combined data
cor(rbind(N1,N2))[1,2]
This yields:
[1] 0.7370049
So, this is the overall correlation of the data when the two groups are combined. And now let's try out the equation above:
(n1*sdx1*sdy1*r1 + n1*dx1*dy1 + n2*sdx2*sdy2*r2 + n2*dx2*dy2) / (sqrt(n1*(sdx1^2+dx1^2) + n2*(sdx2^2+dx2^2)) * sqrt(n1*(sdy1^2+dy1^2) + n2*(sdy2^2+dy2^2)))
This yields:
[1] 0.7370049
Exactly the same.
If you need a reference for the equation:
Dunlap, J. W. (1937). Combinative properties of correlation coefficients. Journal of Experimental Education, 5(3), 286-288.
It's equation (13) in the article. The generalization to more than two groups is also given (equation 14).
|
47,973
|
Converting Adjusted Odds Ratios to its RR counterpart
|
You can do this calculation for an adjusted OR (I presume from a logistic regression) to a RR, but the end result may not be useful for your goal of meta-analysis. The essential problem is that the adjusted OR $exp(\beta_1)$ from a logistic regression is not an "average" over the population. And so there's no way to calculate a population average relative risk from a logistic regression OR. Simply using the population baseline risk to convert $exp(\beta_1)$ to an RR will be incorrect.
Instead, you only can calculate relative risks for fixed sets of covariates. Say you have:
$$g(Y) = \beta_0 + \beta_1 Treatment + \beta_2 Age + \beta_3 Gender$$
Then $exp(\beta_1)$ represents the multiplicative change in odds given fixed values for $Age$ and $Gender$. You essentially have different $p_0$ for different sets of covariates, so you end up with different relative risks for say, a (40, Female) vs a (30, Male).
Thus unless you're concerned with comparing a very specific set of fixed covariates, this likely isn't useful for meta-analysis. Separating the analysis into those that report RR and those that report OR is probably the best bet, as suggested here.
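For the special case of a fixed covariate pattern with a known baseline risk $p_0$, the conversion itself is simple (a Python sketch using the standard formula from Zhang & Yu, 1998; the numbers are illustrative):

```python
def or_to_rr(odds_ratio, p0):
    """Convert an odds ratio to a relative risk given the baseline
    risk p0 in the reference group. For an adjusted OR this is only
    meaningful with p0 fixed at one specific covariate pattern."""
    return odds_ratio / (1.0 - p0 + p0 * odds_ratio)

# Rare outcome: OR approximates RR; common outcome: they diverge.
assert abs(or_to_rr(2.0, 0.01) - 1.980) < 0.001
assert abs(or_to_rr(2.0, 0.50) - 4.0 / 3.0) < 1e-12
```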
|
47,974
|
Standard deviation of residuals from a linear regression
|
Yes, that's correct. You can also extract this result directly from the model object. For example:
2/sd(resid(mtcars_lm))
[1] 0.6674783
|
47,975
|
Information theory without normalization
|
Just as unnormalized probabilities (likelihoods) can be compared but not turned into probabilities without normalizing, so too with unnormalized log-densities $\phi$ and $\psi$: you cannot calculate the KL divergence itself, but you can compare KL divergences.
For example, suppose you were trying to select a predictive unnormalized distribution $p$ from candidates $p_1, p_2, \dotsc$, given an observed unnormalized distribution $q$. Then you would choose $\text{arg min}_i D(q; p_i)$.
You can estimate this by sampling $x$s and taking a weighted average of $-\log p_i(x)$, weighted according to $q(x)$. This requires no integration.
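A Python sketch of this comparison (the Gaussian log-densities and the proposal are illustrative assumptions; the comparison is meaningful here because the candidates share the same unknown normalizer):

```python
import numpy as np

rng = np.random.default_rng(4)

# Unnormalized log-densities; normalizing constants are never computed.
log_q  = lambda x: -0.5 * (x - 1.0) ** 2   # observed (target)
log_p1 = lambda x: -0.5 * (x - 0.9) ** 2   # candidate near q
log_p2 = lambda x: -0.5 * (x + 2.0) ** 2   # candidate far from q

# Sample from a broad proposal and weight by unnormalized q
# (self-normalized importance sampling): no integration required.
x = rng.normal(0.0, 3.0, size=20_000)
w = np.exp(log_q(x) + 0.5 * (x / 3.0) ** 2)  # q(x) / proposal(x), unnormalized
w /= w.sum()

# Weighted averages of -log p_i(x): comparable across candidates.
score1 = -np.sum(w * log_p1(x))
score2 = -np.sum(w * log_p2(x))
assert score1 < score2  # p1 is the better approximation to q
```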
|
47,976
|
Dirac delta function in likelihood function
|
A typical example where mixed (continuous and finite support) distributions occur is in censoring: the simplest model is to have a continuous variable, say $x\sim\text{N}(\mu,\sigma^2)$ observed, unless it is larger than a fixed value, say $\omicron$, in which case the bound $\omicron$ is reported. In such a case, the density of the reported random variable is defined against a measure that includes the Lebesgue measure on $\mathbb{R}$ and a Dirac mass at $\omicron$, which is a measure that gives a mass of $1$ to any set containing $\omicron$ and $0$ otherwise. With respect to this dominating measure the density is
\begin{align*}
f(x|\mu,\sigma,\omicron) &= \frac{1}{\sqrt{2\pi}\sigma}\exp\{-(x-\mu)^2/2\sigma^2\}\,\mathbb{I}(x<\omicron)\\
& + \int_{\omicron}^{+\infty} \frac{1}{\sqrt{2\pi}\sigma}\exp\{-(y-\mu)^2/2\sigma^2\}\,\text{d}y\,\mathbb{I}(x=\omicron)
\end{align*}
which involves an indicator, not a Dirac mass.
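A small simulation of this censoring model (a Python sketch; the parameter values are illustrative) makes the atom at the bound visible:

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma, bound = 0.0, 1.0, 1.0

x = rng.normal(mu, sigma, size=100_000)
reported = np.minimum(x, bound)  # values above the bound are reported as the bound

# Continuous part below the bound, plus a point mass exactly at it.
atom_mass = np.mean(reported == bound)
# For N(0,1) and bound = 1, P(X >= 1) is about 0.159.
assert abs(atom_mass - 0.159) < 0.01
assert np.all(reported <= bound)
```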
|
47,977
|
Dirac delta function in likelihood function
|
The Dirac $\delta$ function is zero except for one point, where it is infinite. It is used with integrals, and is defined to have an integral of 1 if the special point is included. The effect of this is to "pull" specific point values through the integral.
This lets you assign probability weight to precise point values. @Xian's answer has a nice example of why you might do this.
Technically, it isn't a real-valued function and doesn't work with the usual definitions of the integral -- the Riemann and Lebesgue integrals. It's just convenient to write it as a function inside an integral.
In your example, $d_0$ is the Dirac delta function at zero, which means $\int_X f(y)\delta_0(y)\,\mathrm{d}y$ is $f(0)$ -- i.e. $e^{−L}$ in your example -- if zero lies inside the integration region $X$, and 0 otherwise.
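This "pulling out the point value" can be checked numerically by approximating the delta with a narrow Gaussian (a Python sketch; the test function is an arbitrary choice):

```python
import numpy as np

def nascent_delta(y, eps):
    # Narrow Gaussian that tends to the Dirac delta at 0 as eps -> 0
    return np.exp(-0.5 * (y / eps) ** 2) / (eps * np.sqrt(2.0 * np.pi))

f = lambda y: np.exp(-np.abs(y - 0.5))  # any continuous test function

y = np.linspace(-5.0, 5.0, 2_000_001)
dy = y[1] - y[0]

# The integral of f(y) * delta_eps(y) converges to f(0) as eps shrinks.
val = np.sum(f(y) * nascent_delta(y, eps=0.001)) * dy
assert abs(val - f(0.0)) < 1e-3
```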
|
47,978
|
How does backpropagation learn convolution filters?
|
First of all: 1) The filters are the weights, 2) As justin pointed out, backpropagation is simply a method for computing gradients. You update the weights using another optimization algorithm such as gradient descent. Gradient descent updates the weights at each time step. It updates the weights using the negative gradient of the loss function. The partial derivatives that are being calculated are not those of the weights, but of the loss function with respect to the weights:
$$
w_{t+1} = w_{t} - \epsilon \frac{\partial \mathcal{L}(w)}{\partial w} \\
$$
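A self-contained Python sketch of this update recovering a known 1-D convolution filter by gradient descent (the filter, data, and learning rate are illustrative; for the squared loss, the gradient with respect to each filter tap is the error correlated with the correspondingly shifted inputs):

```python
import numpy as np

rng = np.random.default_rng(6)

true_w = np.array([0.25, 0.5, 0.25])      # the filter to be learned
x = rng.normal(size=200)
y = np.convolve(x, true_w, mode="valid")  # noiseless targets

w = np.zeros(3)
lr = 0.01
for _ in range(2000):
    err = np.convolve(x, w, mode="valid") - y
    # dL/dw_k = mean over positions of err * shifted input
    grad = np.array([err @ x[2 - k : len(x) - k] for k in range(3)]) / len(err)
    w -= lr * grad  # the update w <- w - lr * dL/dw
assert np.allclose(w, true_w, atol=1e-3)  # filter recovered
```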
|
47,979
|
Do these random variables satisfy Lindeberg's condition?
|
I will start with just some guidance, and perhaps return later to complete the answer.
Consider the first sequence of random variables, and note that $|X_n| = n$. In other words, for a given $n$ the absolute value of the random variable is a constant (as is always the case for dichotomous random variables symmetric around zero).
Also,
$$s^2_n = \sum_{k=1}^n\sigma^2_k = 1+2^2+3^2+...+n^2 =\frac {n(n+1)(2n+1)}6 = O(n^3)$$
Then the indicator function for some $k$ becomes
$$\mathbb{1}_{\left\{k>\epsilon \left(\frac {n(n+1)(2n+1)}6\right)^{1/2}\right\}}$$
Nothing random remains in here, so it can be taken out of the expected value, being a deterministic function.
Can you take it from here?
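As a quick numeric sanity check on the closed form for $s_n^2$, and on the deterministic indicator (an illustrative sketch with an arbitrary $\epsilon$, not part of the original hint):

```python
import math

n = 1000
s2 = sum(k**2 for k in range(1, n + 1))      # sum of Var(X_k) = k^2
assert s2 == n * (n + 1) * (2 * n + 1) // 6  # matches the closed form above

# the indicator {k > eps * s_n} is deterministic; since s_n grows like
# n^{3/2}, eventually no k <= n exceeds the threshold
eps = 0.1
s_n = math.sqrt(s2)
print(sum(k > eps * s_n for k in range(1, n + 1)))  # → 0 for n = 1000
```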
|
47,980
|
Using predicted probabilities as regressors
|
If you are interested in an approximation of the average partial effect you could just use a linear probability model in the first stage, i.e. do your instrumental variables estimation via 2SLS, for instance, in the usual way. However, due to the non-linearities involved this is not the efficient approach but it can give a good initial idea of the effect under study. For a more in-depth treatment of this argument see Wooldridge (2010) "Econometric Analysis of Cross-Section and Panel Data" in section 15.7.3 from page 594 onward. On page 265-268 he explains the forbidden regression and its problems.
Another procedure that you might be interested in was used by Adams et al. (2009). They use a three-step procedure where they have a probit "first stage" and an OLS second stage without falling for the forbidden regression problem. Their general approach is:
use probit to regress the endogenous variable on the instrument(s) and exogenous variables
use the predicted values from the previous step in an OLS first stage together with the exogenous (but without the instrumental) variables
do the second stage as usual
This procedure yields consistent estimates and is generally more efficient than doing 2SLS with a linear probability model in the first stage.
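A minimal numerical sketch of the three steps on simulated data (all variable names and the data-generating process are hypothetical, and the probit is fit by maximum likelihood with SciPy; this illustrates the procedure, it is not Adams et al.'s code):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                      # instrument
c = rng.normal(size=n)                      # exogenous control
u = rng.normal(size=n)                      # unobserved confounder
d = (0.8 * z + 0.5 * c + u + rng.normal(size=n) > 0).astype(float)  # endogenous dummy
y = 1.0 * d + 0.5 * c + u                   # structural equation; true effect of d is 1.0

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Step 1: probit of the endogenous dummy on instrument + exogenous variables
X1 = np.column_stack([np.ones(n), z, c])
nll = lambda b: -np.sum(d * norm.logcdf(X1 @ b) + (1 - d) * norm.logcdf(-(X1 @ b)))
b1 = minimize(nll, np.zeros(3)).x
d_prob = norm.cdf(X1 @ b1)                  # fitted probabilities

# Step 2: OLS first stage on the fitted probabilities (instrument dropped)
X2 = np.column_stack([np.ones(n), d_prob, c])
d_fit = X2 @ ols(X2, d)

# Step 3: second stage as usual
X3 = np.column_stack([np.ones(n), d_fit, c])
beta = ols(X3, y)
print(beta[1])                              # ≈ 1.0, the structural effect
```

Naive OLS of `y` on `d` would be biased upward here because `u` enters both equations; the three-step estimate recovers the true effect.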
|
47,981
|
Meaningful inference about data structure based on components with low variance in PCA
|
This sort of question did appear several times on CV (you have to browse through PCA clustering questions). The short answer to your question is yes, it makes sense inspecting junior dimensions in search for a structure (such as clusters) in your data. But why not? Often senior components explaining the lion's share of the variance are irrelevant to the currently important distinctions in the data. I might cut a loaf of bread lengthwise; then the 1st PC of that ellipsoid won't show the two halves, but PC2 or PC3 is likely to show it - the bimodality.
One should remember that dimensionality reduction methods (such as PCA, PCoA) are not intended to find clusters or to map classes the best way. They do not replace cluster analysis or discriminant analysis, therefore. With PCA or alike techniques, you only can hope that some dimensions will uncover the structure for you.
Just one example. Here are two scatterplots of the same 2-class data: one shows the first PC drawn on it, the other shows the discriminant function drawn. Neither PC1 nor PC2 (the remaining component, orthogonal to it) is, on its own, clearly bimodal. The discriminant is much better in that respect, because it was extracted precisely to capture the difference between the two classes.
An analytically logical path to uncover-then-plot structure would be to perform cluster analysis (or latent class analysis) to form classes, then use discriminant analysis (or perhaps multidimensional INDSCAL scaling) to plot them. However, discriminant analysis (DA) results naturally depend on the classes. PCA/PCoA results do not, since these methods are unsupervised and blind to nonhomogeneity in the data. But that is exactly the reason (or at least one of them) why many people prefer to attempt PCA instead of DA when visualizing class distinctions.
You say: To me this feels like you are fishing for the results that you want to see. This apprehension would be relevant in the context of multiple statistical significance testing, not in the present context of exploratory data analysis. Yes, EDA is "fishing" for revelations that might look good to you; that is what it is about. On the other hand, if you prefer to think of the junior dimensions of the data as noise (rather than as weak but substantive dimensions), then the "fishing" claim is indeed appropriate. PCA itself does not separate signal from noise; one has to test statistically whether dimensions resemble noise or signal, but that implies assumptions about the data, and so we greet the vicious circle. Fortunately, with a sufficiently large sample size, noise dimensions are likely to blur real class differences, not to fake them.
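The loaf-of-bread point can be made concrete with simulated data (a hypothetical sketch): two classes share a high-variance direction that says nothing about class, while the class split lies along a low-variance direction, so PC1 misses it and PC2 finds it.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
# shared high-variance direction, uninformative about class (the loaf's length)
x = rng.normal(scale=5.0, size=2 * n)
# low-variance direction that actually separates the two classes (the cut)
y = np.concatenate([rng.normal(-1.5, 0.5, n), rng.normal(1.5, 0.5, n)])
X = np.column_stack([x, y])
labels = np.repeat([0, 1], n)

Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T                      # PC scores, columns ordered by variance

for j in range(2):
    gap = abs(scores[labels == 1, j].mean() - scores[labels == 0, j].mean())
    print(f"PC{j + 1}: class separation = {gap / scores[:, j].std():.2f} SDs")
# PC1 shows essentially no separation; PC2 separates the classes clearly
```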
|
47,982
|
Copulas with Regression
|
In my opinion the two methods (copula, regression) answer quite different questions. The copula approach is much more general than regression, and one reason you have not seen regression models based on copulas might be that using copulas is much harder than using regression. Two observations on why this is so:
For a copula fit you need to know or estimate the joint distribution of all variables involved. You do not need this for regression.
If you are only interested in the response, regression gives you the answer more or less directly. But from the joint distribution you need to manufacture the conditional expectation of the response with additional effort.
This extra effort for estimating the joint distribution and only then finding the expected response would need to be justified by the specific problem you are interested in. Two justifications I can think of are: You are actually interested in the joint distribution (that is what you called "traditionally") or you know that your model does not allow for the standard assumptions of regression (additive independent errors, say).
On your questions 1. and 2.: Sure you can do this in theory (if the copula is differentiable and has a density). If you know the joint distribution, you can calculate all marginals and conditional expectations. The problems start when you want to estimate this from data. Unless your problem prescribes a specific, nice parametric copula, you might need special samples or lots of them to do this.
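A tiny bivariate-normal sketch of the second observation (illustrative only): regression estimates $E[Y\mid X]$ directly from the data, while the joint-distribution route first estimates the dependence and only then derives the same conditional expectation from it.

```python
import numpy as np

rng = np.random.default_rng(2)
rho = 0.6
xy = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=20_000)
x, y = xy[:, 0], xy[:, 1]

# Regression route: E[Y|X=x] fitted directly by least squares
slope, intercept = np.polyfit(x, y, 1)

# Joint route: estimate the joint distribution (here just the Gaussian
# dependence parameter), then derive E[Y|X=x] = rho * x from it
rho_hat = np.corrcoef(x, y)[0, 1]

print(round(slope, 2), round(rho_hat, 2))   # both ≈ 0.60
```

For this nice parametric case the two routes agree; the extra effort of the joint route only pays off when you need the whole joint distribution or the regression assumptions fail.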
|
47,983
|
Copulas with Regression
|
I have recently devised a curve fitting method for the relationship between two random variables based on copulas:
Regression by Integration demonstrated on Ångström-Prescott-type relations, Renewable Energy, Volume 127, 2018, Pages 713–723, ISSN 0960-1481, https://doi.org/10.1016/j.renene.2018.05.004 (http://www.sciencedirect.com/science/article/pii/S0960148118305238)
Abstract: We present a novel approach for the determination of the relationship between two random variables, which we call Regression by Integration. The resulting curve is a least absolute error estimate. Compared to other regression methods, it has the advantage that, instead of a sample of simultaneously taken pairs of the two random variables, only a separate sample of each of the random variables is required. We demonstrate the practicability of the method on Ångström-Prescott-type relations and compare the results with those obtained by least square error fits. We present supporting theoretical background information based on copulas. We show that Regression by Integration leads to the strict interdependence of the two random variables; Spearman's rho is equal to one.
Keywords: Ångström-Prescott relation; Copula; Curve fits; Regression by Integration; Random variable
|
47,984
|
Can $p(Y|a,b)$ ever be equal to $p(Y|a) \cdot p(Y|b)$?
|
The equation in the question holds only in the trivial case where both $p(Y|a)$ and $p(Y|b)$ are point masses on a single $Y$ value. If it were true, then since $p(Y|a,b)$ must sum to 1 over $Y$, we would need $\sum_Y p(Y|a) p(Y|b) = 1$. However, consider the following bound:
$$
\sum_Y p(Y|a) p(Y|b) \leq \sum_Y p(Y|a) \max_{Y'} p(Y'|b) = \max_{Y'} p(Y'|b)
$$
For the LHS to be 1, there must be a value of Y for which $p(Y|b)=1$, which means that value of $Y$ is the only one possible. By symmetry, this must also be true for $p(Y|a)$.
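The bound is easy to verify numerically; a quick sketch with arbitrary 5-point distributions:

```python
import numpy as np

rng = np.random.default_rng(3)
for _ in range(1000):
    pa = rng.dirichlet(np.ones(5))          # a random p(Y|a)
    pb = rng.dirichlet(np.ones(5))          # a random p(Y|b)
    # sum_Y p(Y|a) p(Y|b) <= max_Y p(Y|b), as derived above
    assert (pa * pb).sum() <= pb.max() + 1e-12

# equality (sum = 1) occurs only when both are point masses on the same Y
point = np.zeros(5); point[2] = 1.0
assert np.isclose((point * point).sum(), 1.0)
```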
|
47,985
|
Can $p(Y|a,b)$ ever be equal to $p(Y|a) \cdot p(Y|b)$?
|
Sure, if $a$ and $b$ are assumed to be non-interacting events yielding independent information about $Y$ and $Y$ has a uniform prior, then this is the only reasonable way of combining information. Geoff Hinton calls this a product of experts. One caveat, if $Y$ doesn't have a uniform prior, then you'll double-count it when you do the pointwise multiplication. So you should really do $$P(Y\mid a,b) \propto \frac{p(Y\mid a) \cdot p(Y\mid b)}{p(Y)}$$.
Maybe you could say that the likelihoods induced on $a$ and $b$ are independent given $Y$?
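A small numeric check (with arbitrary made-up distributions) that dividing out the prior once recovers the exact posterior when $a \perp b \mid Y$:

```python
import numpy as np

rng = np.random.default_rng(4)
pY = rng.dirichlet(np.ones(3))              # prior p(Y)
pa_Y = rng.dirichlet(np.ones(2), size=3)    # p(a|Y), rows indexed by Y
pb_Y = rng.dirichlet(np.ones(2), size=3)    # p(b|Y)

a, b = 1, 0
# exact posterior via Bayes, using a independent of b given Y
post = pY * pa_Y[:, a] * pb_Y[:, b]
post /= post.sum()

# product of experts: combine the two single-evidence posteriors,
# dividing out the prior once so it is not double-counted
pY_a = pY * pa_Y[:, a]; pY_a /= pY_a.sum()
pY_b = pY * pb_Y[:, b]; pY_b /= pY_b.sum()
poe = pY_a * pY_b / pY
poe /= poe.sum()

assert np.allclose(post, poe)
```

Without the division by `pY`, the two recombined posteriors would count the prior twice and disagree with the exact posterior whenever the prior is non-uniform.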
|
47,986
|
Poisson confidence interval using the pivotal method
|
As far as I know, there isn't really a pivotal quantity for $\lambda$*, though it's possible to construct approximately pivotal quantities if $n\lambda$ isn't small. (I include $n$ there just in case you have multiple observations from the same Poisson. In many cases you'll just have the one count. From here on I'll just refer to $X$ and $\lambda$ as if we were using a single $X$.)
* However, that's not to say nothing can be done. You can derive an interval for $\lambda$ from the relationship between the Poisson and the chi-square (see the end of this section). I can't say that it counts as pivotal, though.
For example, $\sqrt{X+\frac{3}{8}}$ is approximately normal with nearly constant variance (this is related to the Anscombe transform for the Poisson), so could be used to construct an approximate pivotal quantity (by subtracting its mean for example); $\sqrt{X}+\sqrt{X+1}$ - Freeman-Tukey - is another, similar choice from which you could obtain an approximately pivotal quantity. (Indeed, the confidence interval here relies on just such an approach)
To my recollection, some papers have given other quantities - if I remember where I've seen these, I'll add references.
With large values of $\lambda$, the simpler $\frac{X-\lambda}{\sqrt \lambda}$ might be used as an approximately pivotal quantity.
But with small $\lambda$ (more generally small $n\lambda$), there's really not much that can be done - I don't think you get a function of the parameter and the data whose distribution doesn't depend on the parameter. If your $\lambda$ is down around 0.5 or 1 or 2, say, there's not much you can really do about it... the large spikes at 0, 1 and 2 change substantially in relative probability with $\lambda$ and no transformation is going to alter that.
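A quick simulation (illustrative sketch) of the variance-stabilizing behaviour of $\sqrt{X + 3/8}$: once $\lambda$ is not small, the variance settles near $1/4$ regardless of $\lambda$, which is what makes the quantity approximately pivotal after centering.

```python
import numpy as np

rng = np.random.default_rng(5)
for lam in [10, 30, 100]:
    x = rng.poisson(lam, size=200_000)
    v = np.sqrt(x + 3 / 8).var()
    print(f"lambda={lam:>3}: var(sqrt(X + 3/8)) = {v:.3f}")  # ≈ 0.25 each time
```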
|
47,987
|
Wherefore Big Data?
|
It isn't only the data that is big; the problem is big, too.
Indeed, the benefit of increasing the sample size is small if you are merely computing the mean of terabytes of data; nobody is interested in the 10th digit of the mean anyway...
More often than not, big-data problems are really a big number of problems to be solved at once. You have millions of users and thousands of products; the sample size for each of them isn't big data, but you have a lot of them. Similarly, in image recognition you have lots of pixels and lots of labels (ImageNet has some 20,000 categories), so more often than not you don't even have a single training example that is really similar...
When searching a large space of hypotheses, you also need to adjust for multiple testing. Say each individual test is correct with certainty $\alpha=0.999$. If you test just 100 hypotheses, you end up with a certainty of only $\bar\alpha=0.999^{100}\approx 0.90$ that every result is correct. And this confidence drops quickly: at 1000 tests, there is roughly a 2-in-3 chance of at least one false positive. A (at least theoretical) way out is to use a stricter per-test level such as $\alpha=0.99999$, but then you may need a much larger sample to ever reach such confidence...
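The arithmetic behind those numbers is a one-liner to check:

```python
# per-test certainty alpha vs. familywise certainty over m independent tests
alpha = 0.999
print(round(alpha ** 100, 3))     # ≈ 0.905: 100 tests
print(round(alpha ** 1000, 3))    # ≈ 0.368: ~2 in 3 chance of a false positive
print(round(0.99999 ** 1000, 3))  # ≈ 0.990: a stricter per-test level helps
```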
|
47,988
|
Using K-fold cross validation to select a model's parameters
|
Cross-validation just gives you an estimate of your out-of-sample risk. It doesn't produce a better model. To get the most precise estimate of your coefficients, you should use all of your data.
|
47,989
|
Using K-fold cross validation to select a model's parameters
|
Actually, I've already understood how to do this. In case someone stumbles upon this question: cross-validation can serve as a parameter-tuning tool. To tune a model's parameters using K-fold cross-validation, train and test each candidate model K times, once for each of the K train/test splits, and average its out-of-sample error. The candidate with the best average error is the model that (probably) generalizes best.
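A bare-bones sketch of that procedure (hypothetical data, ridge regression with a few candidate penalties, plain NumPy):

```python
import numpy as np

rng = np.random.default_rng(6)
n, p, K = 200, 10, 5
X = rng.normal(size=(n, p))
beta = np.zeros(p); beta[:3] = [2.0, -1.0, 0.5]
y = X @ beta + rng.normal(size=n)

def ridge_fit(X, y, lam):
    # closed-form ridge solution for penalty lam
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

folds = np.arange(n) % K                    # assign rows to K roughly equal folds
cv_err = {}
for lam in [0.01, 1.0, 100.0]:              # the candidate "models"
    errs = []
    for k in range(K):
        tr, te = folds != k, folds == k     # train on K-1 folds, test on the K-th
        b = ridge_fit(X[tr], y[tr], lam)
        errs.append(np.mean((y[te] - X[te] @ b) ** 2))
    cv_err[lam] = np.mean(errs)             # average out-of-sample error

best = min(cv_err, key=cv_err.get)          # candidate that generalizes best
print(best, {k: round(v, 3) for k, v in cv_err.items()})
```

The heavily over-penalized candidate (`lam = 100`) shows a clearly worse cross-validated error, which is what the averaged out-of-sample comparison is meant to reveal.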
|
47,990
|
Affine equivariance and consistency
|
For a counterexample consider a one-dimensional sequence, $x_i\sim \mathcal N(0, \sigma^2)$, and let $S$ be the sample variance
$$S(x_1, \ldots, x_n) = \frac{1}{n}\sum_{i=1}^n x_i^2 - \left(\frac{1}{n}\sum_{i=1}^n x_i\right)^2.$$
Set $B=S$ except let $B=0$ whenever $|x_1+x_2+\cdots+x_n| \lt 1$. Although $S$ is equivariant, $B$ obviously is not, because rescaling the $x_i$ to be sufficiently small will turn an almost surely positive value of $S$ into a zero value. Nevertheless, almost surely the limit of $B(x_1, \ldots, x_n)$ will equal $\sigma^2$, because the chance that $B=0$ is bounded above by $\frac{1}{\sigma\sqrt{n}}\sqrt{2/\pi}\to 0$.
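The two claimed properties of $B$ are easy to see in simulation (an illustrative sketch, using the biased $1/n$ sample variance as $S$):

```python
import numpy as np

rng = np.random.default_rng(7)
sigma2 = 4.0

def B(x):
    s = x.var()                              # the sample variance S (1/n version)
    return 0.0 if abs(x.sum()) < 1 else s    # ...zeroed on a shrinking event

# consistency: for large n the zeroing event is vanishingly rare, so B ≈ sigma^2
x = rng.normal(0, np.sqrt(sigma2), size=200_000)
print(B(x))                                  # ≈ 4.0

# non-equivariance: shrinking the data forces the zero branch,
# even though the rescaled sample variance is still positive
print(B(x * 1e-6), (x * 1e-6).var() > 0)     # 0.0 True
```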
|
47,991
|
Affine equivariance and consistency
|
Let $B(X_n)$ be an affine equivariant estimator. Let $A(X_n) = B(X_n) k(B(X_n))$ where $k$ is a scalar function such as trace. Then $A$ is consistent but not affine equivariant. This is called a weak covariance functional.
|
47,992
|
Resources to write a statistical analysis plan
|
While I would argue that a statistical analysis plan (SAP) is a really good idea for any study or experiment, it is in clinical research that you will find most guidance. That is because the field is heavily regulated and because it is in the interest of industry to describe best practices.
In most such research, there is already a history and a huge set of accepted methodology. That is why you can find documents that lay out the structure of the SAP in detail, such as:
The FDA's Guidance for Industry, E9 Statistical Principles for Clinical Trials
The ENCePP Guide on Methodological Standards in Pharmacoepidemiology, Chapter 5: Statistical and epidemiological analysis plan
There are many excellent resources online for evaluating or constructing the SAP. These include many articles and sites offering criteria for assessing or evaluating SAPs --- for example, this Review of Statistical Analysis Plans.
Most companies and larger research institutions have templates for various documents, including the SAP --- this sample Word template is one of many.
Also, in such settings, you could expect the SAP to undergo just as rigorous a review as the protocol, case report forms, data management plan, statistical programs, and every other artifact associated with the research.
At its heart, the SAP is actually quite a practical document. It allows communication among many different players about what may be expected from the research. It is not going to be the love-child of academia and industry, though.
Most of the SAP is probably going to be devoted to very mundane things such as planned data listings, summary tables, and summary figures. The actual statistical analyses planned might occupy only a little bit of text in proportion to the rest of the document.
However, really good listings, summary tables, and summary figures can be very informative.
How deep should one go into detail there?
Depending on the type of study, you may wish to go into more or less detail. However, there is a line to walk. If you over-specify the statistical analysis, you may commit to things that cannot be done, depending on the actual distribution of the data that is collected. Also, it does no good to specify highly sophisticated statistical analysis that may be quite "brittle" to real-world exigencies, or that might pose a problem in communication with regulatory authorities.
There will probably need to be some sound basis given for the choice of statistical methods used. This will most likely be in the form of some more or less standard references depending a bit on the type of research.
While I really agree with your general approach of starting with first principles, in practice it is usually better to use an analysis that is as standard and as simple as can meet the needs of the study. Then, if resources and the data permit, more elaborate analyses could be planned.
This means that usually there needs to be statistical input into the design of the study, in order to allow a decently straightforward statistical analysis in the first place.
What am I allowed to omit in a SAP although I thought about it for designing the analysis?
That is a good question. I would lean toward not putting in a lot of "sophisticated" analysis methods or branching logic, unless for some reason that was deemed absolutely necessary. Instead, you might wish to keep track of the ideas you have for extra analysis for the case where you have adequate time at hand.
If you can foresee bad things happening with respect to the distribution of the data, then perhaps the research should be redesigned or not be carried out. If you can foresee certain common problems, then you can certainly acknowledge them along with including a bit of discussion about how they will be handled.
For example, it is pretty common to provide that a nonparametric test will be used instead of a parametric test, if the data distribution requires.
In the main, you want to provide a straightforward description of the proposed statistical methods, with perhaps some discussion of why they are appropriate. But, if a lot of text is needed, I would rethink the analysis or the research.
If you over-promise results, then you may not be able to deliver on them. If you under-specify the analysis, then you may create unnecessary negative feedback or delays.
As mentioned, there are many online resources for SAPs used in clinical research. Most of those principles and practices are just as useful in every other field of research, from my experience. However, it is unrealistic to expect people to put the same amount of effort into the SAP unless perhaps the research will cost a lot of money and time. Even then...
|
47,993
|
Expected value of dot product between a random unit vector in $\mathbb{R}^n$ and another given unit vector
|
Judging from the result, it appears the context implicitly supposes the distribution of $x$ is invariant under orthogonal transformations: I would call this a spherically-symmetric distribution.
(There are plenty of spherically-symmetric distributions. Starting with any distribution $F$ in $\mathbb{R}^n$, define $\tilde F$ to be the values of $F$ averaged over the action of the orthogonal group $O(n)$ of rotations and reflections about the origin. The average exists because $O(n)$ is compact and acts continuously. It is immediate that $\tilde F$ is invariant under $O(n)$. In particular, a Normal distribution of mean $(0,0,\ldots,0)$, diagonal variance matrix, and equal variances is spherical.)
It is a (simple and geometrically obvious) algebraic result that any unit vector $v$ can be extended to an orthonormal frame $(v=v_1, v_2, \ldots, v_n)$. Because the distribution is spherically symmetric and any element $v_i$ can be rotated into any other element $v_j$, the coordinates $x\cdot v_i$ all have the same distribution. Let $\mu_2$ be the common expected value of all the $(x\cdot v_i)^2$.
Since $x$ is assumed to be a unit vector,
$$1 = 1^2 = x\cdot x = \sum_i (x\cdot v_i)^2.$$
Take expectations of both sides and use the linearity of expectation to compute
$$1 = \mathbb{E}(1) = \mathbb{E}(x\cdot x) = \mathbb{E}\left( \sum_{i=1}^n (x\cdot v_i)^2\right) = \sum_{i=1}^n \mathbb{E}((x\cdot v_i)^2) = \sum_{i=1}^n \mu_2 = n\mu_2,$$
implying $1/n = \mu_2 = \mathbb E ((x\cdot v)^2)$.
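A Monte Carlo check of the $1/n$ result (illustrative; normalizing standard Gaussians is one standard way to draw a spherically symmetric unit vector):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 5, 200_000
x = rng.normal(size=(reps, n))                 # spherically symmetric draws
x /= np.linalg.norm(x, axis=1, keepdims=True)  # project onto the unit sphere
v = np.zeros(n)
v[0] = 1.0                                     # any fixed unit vector works
est = float(np.mean((x @ v) ** 2))
# est should be close to 1/n = 0.2
```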
|
47,994
|
Many dependent variables, few samples: is this an example of "large $p$, small $n$" problem?
|
There are several combinations of the sizes of $n$ and $p$: small $n$ - large $p$, small $p$ - large $n$, large $n$ - large $p$ ... See Johnstone & Titterington, 2009, Statistical challenges of high-dimensional data for an overview.
In your case, it appears that you have a small $p=1$ and relatively small $n$, with high dimension of the dependent variable. It is likely that your independent variable may not contain enough information to properly model $300$ responses.
The justification of this claim is as follows. If you use a GLM for your data, then you have $20$ samples to estimate $300+$ parameters in the covariance matrix of the errors. This may induce over-fitting, and the precision of the estimators will be unnecessarily vague (in the sense that confidence intervals for these parameters might be too wide) and inaccurate (far from the true value). However, if you restrict the structure of the covariance matrix, then it may be possible to estimate the parameters more accurately (How to restrict the structure of the covariance matrix? That's a big question which depends on the context). Moreover, the fewer covariates you use, the more "responsibility" the residual errors carry to explain the unobserved variability. This may, for instance, inflate the variances or induce the need for more flexible distributions than normal for modelling the residual errors.
Additional references of possible interest:
West, 2003, Bayesian Factor Regression Models in the “Large p, Small n” Paradigm
CV question: Summary of "Large p, Small n" results
|
47,995
|
Many dependent variables, few samples: is this an example of "large $p$, small $n$" problem?
|
[As I read it, the question is primarily about terminology, and @East's answer (good as it is) does not explicitly address it.]
Sometimes the distinction between dependent and independent variables is not so clear. As you are referring to MANOVA, you probably have $300$ variables measured for two groups. Technically, you are right, it is $300$ dependent variables, but imagine that you want to predict the group membership by looking at the variables (after all, the purpose of running MANOVA is to test if the groups are different or not). Now group identity suddenly becomes a dependent variable, and you have $300$ independent variables to make the prediction.
So I think the distinction between dependent and independent variables is not very important here, and your situation can be safely described as "large $p$, small $n$".
In practice, people definitely do refer to classification problems, e.g. linear discriminant analysis, with number of features $p\gg n$ as "large $p$, small $n$" (see e.g. The Elements of Statistical Learning 18.2). But linear discriminant analysis is almost the same thing as MANOVA, see here: How is MANOVA related to LDA? So I would advocate to go ahead and to call it "large $p$, small $n$" in MANOVA context as well.
|
47,996
|
Taylor's expansion on log likelihood
|
If one includes the notational dependency on $n$:
$$
\begin{align*}
\ell_n\left(\theta\right) & = \ell_n\left(\widehat{\theta}_n\right)+\frac{\partial\ell_n\left(\theta\right)}{\partial\theta}\Bigr|_{\theta=\widehat{\theta}_n}\left(\theta-\widehat{\theta}_n\right)+ O_n(|\widehat{\theta}_n - \theta|^2)
\\
\end{align*}
$$
we see that the puzzling point is the $n$-dependency of the remainder term.
A rigorous way to get an approximation with an $n$-independent remainder,
$$
\begin{align*}
\ell_n\left(\theta\right) & = \ell_n\left(\widehat{\theta}_n\right)+\frac{\partial\ell_n\left(\theta\right)}{\partial\theta}\Bigr|_{\theta=\widehat{\theta}_n}\left(\theta-\widehat{\theta}_n\right)+ O(|\widehat{\theta}_n - \theta|^2),
\\
\end{align*}
$$
is Taylor-Lagrange's inequality: if you are able to bound $|\ell_n''| \leq M$ uniformly in $n$ (on an appropriate interval), then Taylor-Lagrange's inequality controls the remainder by a constant that does not depend on $n$.
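Spelled out (this is the standard statement of the inequality, added here for completeness), the uniform bound reads:
$$
\left|\ell_n\left(\theta\right) - \ell_n\left(\widehat{\theta}_n\right) - \frac{\partial\ell_n\left(\theta\right)}{\partial\theta}\Bigr|_{\theta=\widehat{\theta}_n}\left(\theta-\widehat{\theta}_n\right)\right| \le \frac{M}{2}\left(\theta-\widehat{\theta}_n\right)^2 \quad\text{for every } n.
$$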
|
47,997
|
Taylor's expansion on log likelihood
|
Strictly speaking, the likelihood function has two components: the observations and the parameters. It is typically seen as a function of the parameters when the sample is fixed but you can also study its behaviour as a random variable when you fix the parameters and see it as a function of the random variables (which are not fixed).
It is justified to use the Taylor expansion when the sample is fixed, that is, a realisation of the corresponding random variables. The asymptotic behaviour is studied on a sequence of likelihood functions $\ell_n(\theta)$, indexed by the sample size in the usual way done in analysis.
The use of the Taylor expansion is actually quite common, since it allows for constructing a normal approximation to the likelihood by using a second order expansion as follows:
\begin{align*}
\ell\left(\theta\right) & \approx \ell\left(\widehat{\theta}\right) + \frac{\partial\ell\left(\theta\right)}{\partial\theta}\Bigr|_{\theta=\widehat{\theta}}\left(\theta-\widehat{\theta}\right) + \frac{1}{2}\frac{\partial^2\ell\left(\theta\right)}{\partial\theta^2}\Bigr|_{\theta=\widehat{\theta}}\left(\theta-\widehat{\theta}\right)^2 .
\\
\end{align*}
The first term is a constant, and the second term vanishes because the score is zero at the maximum. Then,
\begin{align*}
\ell\left(\theta\right) & \approx C + K\left(\theta-\widehat{\theta}\right)^2,
\\
\end{align*}
and finally taking exponentials on both sides:
\begin{align*}
{\mathcal L}\left(\theta\right) & \approx C^{\prime}\exp\left\{ K \left(\theta-\widehat{\theta}\right)^2\right\}
\\
\end{align*}
which resembles the kernel of a normal density. $K$ is a negative constant since it is half the second derivative of the log-likelihood evaluated at the MLE, which is a maximum. The only requirement is, as for any second-order Taylor expansion, twice differentiability.
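As a small numerical illustration (the binomial example is my own, not from the original question), the quadratic expansion around the MLE tracks the exact log-likelihood closely near $\widehat{\theta}$:

```python
import math

n, k = 100, 60                     # 60 successes in 100 Bernoulli trials
def loglik(p):
    return k * math.log(p) + (n - k) * math.log(1 - p)

phat = k / n                       # MLE
info = k / phat**2 + (n - k) / (1 - phat)**2   # observed information -l''(phat)
def quad(p):
    # second-order expansion: l(phat) + 0.5 * l''(phat) * (p - phat)^2
    return loglik(phat) - 0.5 * info * (p - phat) ** 2

gap = abs(loglik(0.58) - quad(0.58))
# gap is below 0.01: near the MLE the log-likelihood is nearly quadratic
```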
|
47,998
|
Taylor's expansion on log likelihood
|
Strictly speaking, that expression doesn't make sense a priori, but it can be made precise. The log-likelihood is a random function (or a sequence of random functions if you're in the asymptotic setting) on the parameter space. So sure, for a given realization of that random function, one can write (for sample size $n$)
\begin{align*}
\ell\left(\theta\right) & = \ell\left(\widehat{\theta}_n\right)+\frac{\partial\ell\left(\theta\right)}{\partial\theta}\Bigr|_{\theta=\widehat{\theta}_n}\left(\theta-\widehat{\theta}_n\right)+ o(|\widehat{\theta}_n - \theta|)
\\
\end{align*}
exactly as what you have. But that is useless unless you know the random variable $|\widehat{\theta}_n - \theta|$ is small, say in probability as $n \rightarrow \infty$. In other words, you need that the MLE estimator is weakly consistent.
In other words,
\begin{align*}
\ell\left(\theta\right) & = \ell\left(\widehat{\theta}_n\right)+\frac{\partial\ell\left(\theta\right)}{\partial\theta}\Bigr|_{\theta=\widehat{\theta}_n}\left(\theta-\widehat{\theta}_n\right)+ o_p(1).
\\
\end{align*}
Strictly speaking, $\ell$ should be $\ell_n$. In the asymptotic setting, the log-likelihood is a sequence of random functions, but omitting the $n$ is common.
|
47,999
|
When do kernel based method perform better than the regular
|
There is no easy answer to this question.
Typically you would start experimenting with the linear kernel function (which I assume you mean by "regular"). If the data is not linearly separable, there will be errors. So, if the performance is not satisfactory, you would have to try some non-linear kernel functions.
The typical next choices are the RBF kernel and polynomial kernels of degree 2 or 3. Which is best depends on the geometry of the samples in your problem. The idea is to find a kernel function that maps your samples into a higher-dimensional space in which they become linearly separable. In practice, you try the different options one by one until you find something that works satisfactorily for your problem. However, the more exotic kernels typically have a specific type of application or data in mind, and unless you know you need them you can probably skip them.
There are some trade-offs you should have in mind as you increase complexity and start to explore the non-linear kernels:
it becomes easier to overfit the data (you start fitting noise)
the computational complexity increases (more time/memory requirements)
you have to tune more hyperparameters (like gamma, and the degree for polynomial kernels)
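The "not linearly separable, so switch kernels" point can be shown concretely with a kernel perceptron on XOR-style data. This is a minimal sketch, not part of the original answer: the data points, the degree-2 polynomial kernel, and the epoch count are all illustrative assumptions.

```python
# Kernel perceptron demo: XOR-style data is not linearly separable,
# but becomes separable under a degree-2 polynomial kernel.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def linear_kernel(a, b):
    return dot(a, b)

def poly_kernel(a, b):
    # degree-2 polynomial kernel (a . b + 1)^2
    return (dot(a, b) + 1.0) ** 2

def kernel_perceptron(X, y, kernel, epochs=20):
    """Train a kernel perceptron; return the number of mistakes
    made during the final epoch (0 means it has converged)."""
    alpha = [0] * len(X)
    errors = 0
    for _ in range(epochs):
        errors = 0
        for i, (xi, yi) in enumerate(zip(X, y)):
            f = sum(a * yj * kernel(xj, xi)
                    for a, xj, yj in zip(alpha, X, y))
            if yi * f <= 0:          # misclassified (or on the boundary)
                alpha[i] += 1
                errors += 1
    return errors

# XOR-style data: same-sign points are one class, mixed-sign the other
X = [(1, 1), (-1, -1), (1, -1), (-1, 1)]
y = [-1, -1, 1, 1]

linear_errors = kernel_perceptron(X, y, linear_kernel)
poly_errors = kernel_perceptron(X, y, poly_kernel)
print("linear kernel, last-epoch errors:", linear_errors)  # stays > 0
print("poly kernel,   last-epoch errors:", poly_errors)    # reaches 0
```

The linear kernel never stops making mistakes on this data, while the polynomial kernel converges within a few epochs, illustrating why stepping up to a non-linear kernel is the usual next move.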
|
When do kernel based method perform better than the regular
|
There is no easy answer to this question.
Typically you would start experimenting using the linear kernel function (to which I assume you refer as "regular"). If data is not linearly separable then t
|
When do kernel based method perform better than the regular
There is no easy answer to this question.
Typically you would start experimenting with the linear kernel function (which I assume you mean by "regular"). If the data is not linearly separable, there will be errors. So, if the performance is not satisfactory, you would have to try some non-linear kernel functions.
The typical next choices are the RBF kernel and polynomial kernels of degree 2 or 3. Which is best depends on the geometry of the samples in your problem. The idea is to find a kernel function that maps your samples into a higher-dimensional space in which they become linearly separable. In practice, you try the different options one by one until you find something that works satisfactorily for your problem. However, the more exotic kernels typically have a specific type of application or data in mind, and unless you know you need them you can probably skip them.
There are some trade-offs you should have in mind as you increase complexity and start to explore the non-linear kernels:
it becomes easier to overfit the data (you start fitting noise)
the computational complexity increases (more time/memory requirements)
you have to tune more hyperparameters (like gamma, and the degree for polynomial kernels)
|
When do kernel based method perform better than the regular
There is no easy answer to this question.
Typically you would start experimenting using the linear kernel function (to which I assume you refer as "regular"). If data is not linearly separable then t
|
48,000
|
When do kernel based method perform better than the regular
|
Just some more thoughts to previous answer:
What is Kernel or Kernel Method ?
Kernel or Positive-definite kernel is a generalization of a positive-definite matrix. In linear algebra, a symmetric n × n real matrix M is said to be positive definite if zTMz is positive for every non-zero column vector z of n real numbers. Here zT denotes the transpose of z. Kernel methods are a class of algorithms for pattern analysis, whose best known member is the support vector machine (SVM).
What is SVM ?
From wikipedia: "In machine learning, support vector machines (SVMs, also support vector networks) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces."
Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the kernel trick.
Algorithms capable of operating with kernels include the kernel perceptron, support vector machines (SVM), Gaussian processes, principal components analysis (PCA), canonical correlation analysis, ridge regression, spectral clustering, linear adaptive filters and many others. Any linear model can be turned into a non-linear model by applying the "kernel trick" to the model: replacing its features (predictors) by a kernel function.
Radial basis function
The (Gaussian) radial basis function kernel, or RBF kernel, is a popular kernel function used in support vector machine classification.
Fisher kernel
The Fisher kernel, named in honour of Sir Ronald Fisher, is a function that measures the similarity of two objects on the basis of sets of measurements for each object and a statistical model. In a classification procedure, the class for a new object (whose real class is unknown) can be estimated by minimising, across classes, an average of the Fisher kernel distance from the new object to each known member of the given class. The Fisher kernel is the kernel for a generative probabilistic model. As such, it constitutes a bridge between generative and probabilistic models of documents.
Polynomial kernel
The polynomial kernel is a kernel function commonly used with support vector machines (SVMs) and other kernelized models, that represents the similarity of vectors (training samples) in a feature space over polynomials of the original variables, allowing learning of non-linear models.
Intuitively, the polynomial kernel looks not only at the given features of input samples to determine their similarity, but also combinations of these. In the context of regression analysis, such combinations are known as interaction features. The (implicit) feature space of a polynomial kernel is equivalent to that of polynomial regression, but without the combinatorial blowup in the number of parameters to be learned.
The RBF kernel is more popular in SVM classification than the polynomial kernel. The most common degree is d=2.
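The claim above, that the polynomial kernel's implicit feature space is equivalent to explicit polynomial features, can be checked numerically. A minimal sketch (the sample points and the 2-d, degree-2 feature map are illustrative assumptions):

```python
import math

def poly2_kernel(x, z):
    """Degree-2 polynomial kernel (x . z + 1)^2 for 2-d inputs."""
    return (x[0] * z[0] + x[1] * z[1] + 1.0) ** 2

def phi(x):
    """Explicit degree-2 feature map with phi(x) . phi(z) == poly2_kernel(x, z)."""
    r2 = math.sqrt(2.0)
    return [1.0,
            r2 * x[0], r2 * x[1],
            x[0] ** 2, r2 * x[0] * x[1], x[1] ** 2]

x, z = (0.5, -1.2), (2.0, 0.3)
explicit = sum(a * b for a, b in zip(phi(x), phi(z)))
print(poly2_kernel(x, z), explicit)  # identical up to rounding
```

The kernel evaluates one inner product in the original 2-d space; the explicit map needs 6 features already at degree 2, and the gap grows combinatorially with degree and dimension, which is exactly the blowup the kernel trick avoids.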
Thus the answer to question which kernel to use is:
(1) Try the simplest one first and then switch to more complex ones - as overfitting is a danger
(2) Learn from other worked examples for similar data types. This is often a quick solution
|
When do kernel based method perform better than the regular
|
Just some more thoughts to previous answer:
What is Kernel or Kernel Method ?
Kernel or Positive-definite kernel is a generalization of a positive-definite matrix.In linear algebra, a symmetric n × n
|
When do kernel based method perform better than the regular
Just some more thoughts to previous answer:
What is Kernel or Kernel Method ?
Kernel or Positive-definite kernel is a generalization of a positive-definite matrix. In linear algebra, a symmetric n × n real matrix M is said to be positive definite if zTMz is positive for every non-zero column vector z of n real numbers. Here zT denotes the transpose of z. Kernel methods are a class of algorithms for pattern analysis, whose best known member is the support vector machine (SVM).
What is SVM ?
From wikipedia: "In machine learning, support vector machines (SVMs, also support vector networks) are supervised learning models with associated learning algorithms that analyze data and recognize patterns, used for classification and regression analysis. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that assigns new examples into one category or the other, making it a non-probabilistic binary linear classifier. An SVM model is a representation of the examples as points in space, mapped so that the examples of the separate categories are divided by a clear gap that is as wide as possible. New examples are then mapped into that same space and predicted to belong to a category based on which side of the gap they fall on.
In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces."
Kernel methods owe their name to the use of kernel functions, which enable them to operate in a high-dimensional, implicit feature space without ever computing the coordinates of the data in that space, but rather by simply computing the inner products between the images of all pairs of data in the feature space. This operation is often computationally cheaper than the explicit computation of the coordinates. This approach is called the kernel trick.
Algorithms capable of operating with kernels include the kernel perceptron, support vector machines (SVM), Gaussian processes, principal components analysis (PCA), canonical correlation analysis, ridge regression, spectral clustering, linear adaptive filters and many others. Any linear model can be turned into a non-linear model by applying the "kernel trick" to the model: replacing its features (predictors) by a kernel function.
Radial basis function
The (Gaussian) radial basis function kernel, or RBF kernel, is a popular kernel function used in support vector machine classification.
Fisher kernel
The Fisher kernel, named in honour of Sir Ronald Fisher, is a function that measures the similarity of two objects on the basis of sets of measurements for each object and a statistical model. In a classification procedure, the class for a new object (whose real class is unknown) can be estimated by minimising, across classes, an average of the Fisher kernel distance from the new object to each known member of the given class. The Fisher kernel is the kernel for a generative probabilistic model. As such, it constitutes a bridge between generative and probabilistic models of documents.
Polynomial kernel
The polynomial kernel is a kernel function commonly used with support vector machines (SVMs) and other kernelized models, that represents the similarity of vectors (training samples) in a feature space over polynomials of the original variables, allowing learning of non-linear models.
Intuitively, the polynomial kernel looks not only at the given features of input samples to determine their similarity, but also combinations of these. In the context of regression analysis, such combinations are known as interaction features. The (implicit) feature space of a polynomial kernel is equivalent to that of polynomial regression, but without the combinatorial blowup in the number of parameters to be learned.
The RBF kernel is more popular in SVM classification than the polynomial kernel. The most common degree is d=2.
Thus the answer to question which kernel to use is:
(1) Try the simplest one first and then switch to more complex ones - as overfitting is a danger
(2) Learn from other worked examples for similar data types. This is often a quick solution
|
When do kernel based method perform better than the regular
Just some more thoughts to previous answer:
What is Kernel or Kernel Method ?
Kernel or Positive-definite kernel is a generalization of a positive-definite matrix.In linear algebra, a symmetric n × n
|