41,601
Beginner level: Help in learning Kalman Smoother (Part 1) [closed]
Let me take a few steps back. The EM algorithm is not required in a Kalman filter if the design matrices (A, B, Q, R, etc.) are known. They are known only if you know from the outset which physical system you are modelling; if not, you will have to estimate these matrices. Filtering and smoothing operations are performed assuming that these matrices are already known.
EM uses the filtering or smoothing equations by starting with some initial values of the design matrices, running the filtering equations (expectation step), and then lowering the prediction error (maximization step). It is possible to replace the filtering equations with smoothing equations in this procedure; you get to choose either one. (To be precise, filtering is the first step of smoothing, so smoothing is like an add-on.) The difference is that filtering uses only past values, whereas smoothing also takes future values into consideration.
EM Derivation for Kalman Filter is probably the most complete derivation of the EM procedure for the Kalman filter/smoother.
To sum up: when the design matrices are known, you run either the filtering or the smoothing equations to execute the filter. If the matrices are not known, you execute the filter with either the filtering or the smoothing equations, then modify the matrices so that the results of the previous operation are improved. You repeat this procedure until the matrices stop changing appreciably; this is the EM procedure. The good news is that Kalman EM has closed-form solutions for the derivatives in the maximization step, so you don't need numerical techniques for the maximization.
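As an illustration of the loop described above, here is a minimal sketch in Python (my own, not from the answer) for a scalar state-space model $x_t = a\,x_{t-1} + w_t$, $y_t = x_t + v_t$: the E-step runs the Kalman filter and RTS smoother, and the M-step updates $a$, the process variance $q$, and the observation variance $r$ in closed form. All starting values are made up, and the lag-one covariance uses a common approximation rather than the exact recursion.

```python
import random

def kalman_filter(ys, a, q, r, x0=0.0, p0=1.0):
    """Scalar Kalman filter for x_t = a*x_{t-1} + w (var q), y_t = x_t + v (var r)."""
    xf, pf, xp, pp = [], [], [], []
    x, p = x0, p0
    for y in ys:
        xpred, ppred = a * x, a * a * p + q               # predict
        k = ppred / (ppred + r)                           # Kalman gain
        x, p = xpred + k * (y - xpred), (1 - k) * ppred   # update
        xf.append(x); pf.append(p); xp.append(xpred); pp.append(ppred)
    return xf, pf, xp, pp

def rts_smoother(xf, pf, xp, pp, a):
    """Rauch-Tung-Striebel backward pass; also returns the smoother gains J_t."""
    xs, ps, js = xf[:], pf[:], [0.0] * len(xf)
    for t in range(len(xf) - 2, -1, -1):
        j = pf[t] * a / pp[t + 1]
        xs[t] = xf[t] + j * (xs[t + 1] - xp[t + 1])
        ps[t] = pf[t] + j * j * (ps[t + 1] - pp[t + 1])
        js[t] = j
    return xs, ps, js

def em_step(ys, a, q, r):
    """One EM iteration: smoothed moments (E-step), closed-form updates (M-step)."""
    xf, pf, xp, pp = kalman_filter(ys, a, q, r)
    xs, ps, js = rts_smoother(xf, pf, xp, pp, a)
    n = len(ys)
    s11 = sum(xs[t] ** 2 + ps[t] for t in range(1, n))       # sum of E[x_t^2]
    s00 = sum(xs[t] ** 2 + ps[t] for t in range(n - 1))      # sum of E[x_{t-1}^2]
    # Lag-one covariance approximated by J_{t-1} * P^s_t (the exact value
    # needs one extra backward recursion).
    s10 = sum(xs[t] * xs[t - 1] + js[t - 1] * ps[t] for t in range(1, n))
    a_new = s10 / s00
    q_new = (s11 - a_new * s10) / (n - 1)
    r_new = sum((y - x) ** 2 + p for y, x, p in zip(ys, xs, ps)) / n
    return a_new, q_new, r_new

# Simulate data from known matrices, then pretend we don't know them.
random.seed(0)
a_true, q_true, r_true = 0.9, 0.1, 0.5
x, ys = 0.0, []
for _ in range(500):
    x = a_true * x + random.gauss(0, q_true ** 0.5)
    ys.append(x + random.gauss(0, r_true ** 0.5))

a_hat, q_hat, r_hat = 0.5, 1.0, 1.0   # deliberately wrong initial guesses
for _ in range(50):
    a_hat, q_hat, r_hat = em_step(ys, a_hat, q_hat, r_hat)
print(a_hat, q_hat, r_hat)            # should move toward 0.9, 0.1, 0.5
```

The point of the sketch is the shape of the loop: each iteration reruns the smoother under the current matrices, then improves the matrices, exactly as the summary above describes.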
41,602
finding out 2D transformation given a list of sample points
This is almost least squares regression except for the constraint on the parameter. In fact, by viewing $(x,y)\sim x + iy = z$ and $(x^\prime, y^\prime) = x^\prime + i y^\prime = z^\prime$ as complex numbers, we may write the model as
$$z_j^\prime = \alpha + \beta z_j + \varepsilon_j$$
where $\alpha = \Delta_x + i\Delta_y$ and $\beta = \exp(i\theta)$.
Assuming the (complex) random errors $\varepsilon_j$ have zero mean, a common variance, and are independent, it is easy to derive the conclusion that the fit must pass through the "point of means" $(\bar z, \bar z^\prime)$. This determines $\hat \alpha$, reducing the problem to the form
$$z_j^\prime - \bar z^\prime = \beta(z_j - \bar z) + \varepsilon_j$$
subject to the constraint $|\beta|^2 = 1$. This is readily solved using Lagrange multipliers. However, even a univariate minimizer will have no problems with this: ask it to minimize the sum of squares
$$\sum_j |(z_j^\prime - \bar z^\prime) - \exp(i\theta)(z_j - \bar z)|^2$$
Initialize the solution by fitting the unconstrained model (using ordinary regression) to find $\hat\beta$. Rescale $\hat\beta$ to unit length, then apply the univariate minimizer.
All this can be done entirely without using complex numbers, simply by translating the foregoing back into 2D coordinates. This is the method used in the R code below. Only two lines of code are needed:
lambda <- function(theta) sum((xy.prime.shift - rotate(theta) %*% xy.shift)^2)
fit <- optimize(lambda, interval=c(0, 2*pi))
Everything else is preliminary processing or post-processing to apply the fit.
From left to right, the figure shows the original points $z$, the target points $z^\prime$, the graph of the sum of squares against $\theta$ (the optimal value of $\theta$ is marked in red), and the comparison of the fitted values (gray circles) to the target values (red disks). The black arrows show the residual displacements.
By minimizing the sum of squares, this example assumed the components of the errors were uncorrelated and of equal variance. It is readily generalized to the case where the errors may be correlated and/or have unequal variances, without making any fundamental change to the overall approach.
Those familiar with the Gauss-Markov theorem and Maximum Likelihood estimation will have no difficulties obtaining standard errors for the estimates and testing hypotheses. This is not necessary in many applications: often the purpose of this exercise is to find the optimal transformation between two sets of points. However, such auxiliary information would be very useful in estimating the accuracy of the fit when it is applied to new starting points (that is, to compute prediction intervals). Despite the potential value of such an approach, I have never seen anyone do that.
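Incidentally, for the unweighted case the constrained minimization also admits a closed form that can serve as a check on the numerical fit: minimizing $\sum_j |z_j^\prime - e^{i\theta} z_j|^2$ over centered points amounts to maximizing $\mathrm{Re}\left(e^{-i\theta}\sum_j \bar z_j z_j^\prime\right)$, so $\hat\theta = \arg \sum_j \bar z_j z_j^\prime$. A quick sketch (in Python rather than R; the sample data are made up):

```python
import math, random

def best_rotation(xy, xy_prime):
    """Closed-form angle minimizing sum |z' - exp(i*theta) z|^2 over centered
    points.  Writing z = x + iy, the optimum is theta = arg(sum conj(z) * z')."""
    n = len(xy)
    mx = sum(p[0] for p in xy) / n; my = sum(p[1] for p in xy) / n
    mxp = sum(p[0] for p in xy_prime) / n; myp = sum(p[1] for p in xy_prime) / n
    s_re = s_im = 0.0
    for (x, y), (xp, yp) in zip(xy, xy_prime):
        x, y, xp, yp = x - mx, y - my, xp - mxp, yp - myp
        s_re += x * xp + y * yp   # Re(conj(z) z')
        s_im += x * yp - y * xp   # Im(conj(z) z')
    return math.atan2(s_im, s_re)

random.seed(1)
theta = 1.0
pts = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(10)]
c, s = math.cos(theta), math.sin(theta)
tgt = [(c * x - s * y + 1.0, s * x + c * y - 2.0) for x, y in pts]  # rotate + shift
print(round(best_rotation(pts, tgt), 6))  # → 1.0 (noise-free, so exact)
```

With noisy targets the same formula gives the least-squares angle directly, matching what `optimize` finds numerically.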
#
# Preliminaries.
#
set.seed(17)
rotate <- function(theta) matrix(cos(theta)*c(1,0,0,1)+sin(theta)*c(0,-1,1,0), 2, 2)
#
# Generate data.
#
n <- 10
xy <- matrix(rnorm(n*2), nrow=2) # Original points
delta <- c(1,-2) # True mean displacement
theta <- 1 # Rotation (radians)
beta <- rotate(theta) # Rotation matrix
epsilon <- matrix(rnorm(n*2, sd=0.5), nrow=2) # Errors
xy.prime <- beta %*% xy + delta + epsilon # New (target) points
#
# Find an initial rotation by conducting an unconstrained fit.
#
xy.shift <- xy - rowMeans(xy) # Recentered original points
xy.prime.shift <- xy.prime - rowMeans(xy.prime) # Recentered target points
fit.0 <- lm(t(xy.prime.shift) ~ t(xy.shift)) # Unconstrained fit
b.0 <- t(coef(fit.0)[-1, ]) # Estimate of beta
theta.0 <- atan2(b.0[1,2], b.0[1,1]) # The corresponding angle
delta.hat <- rowMeans(xy.prime - xy) # The estimated displacement
#
# Fit the data.
#
lambda <- function(theta) sum((xy.prime.shift - rotate(theta) %*% xy.shift)^2)
fit <- optimize(lambda, interval=c(0, 2*pi))
#
# Extract the estimated rotation angle and compute the fitted values.
#
theta.hat <- fit$minimum
xy.pred <- rotate(theta.hat) %*% xy.shift + rowMeans(xy.prime)
#
# Plot things.
#
#pairs(t(rbind(xy, xy.prime))) # Not too revealing!
par(mfrow=c(1,4))
plot(t(xy), pch=".", asp=1, type="b", col="Gray", xlab="X", ylab="Y",
main="Original points")
text(xy[1, ], xy[2, ], 1:n)
plot(t(xy.prime), pch=".", asp=1, type="b", col="Gray", xlab="X", ylab="Y",
main="Target points")
text(xy.prime[1, ], xy.prime[2, ], 1:n)
x <- seq(0, 2*pi, length.out=101)
plot(x, sapply(x, lambda), type="l", xlab="theta", ylab="Objective",
main="Objective function")
points(theta.hat, fit$objective, pch=16, col="Red")
plot(t(cbind(xy.prime, xy.pred)), type="n", xlab="X", ylab="Y", main="Fit")
lines(t(xy.prime), col="Gray")
arrows(xy.pred[1, ], xy.pred[2, ], xy.prime[1, ], xy.prime[2, ], length=0.1)
points(t(xy.pred), col="Gray")
points(t(xy.prime), pch=16, col="Red")
41,603
Parallel regression assumption
The parallel regression assumption (a.k.a. proportional odds assumption) in ordinal logistic regression says that the coefficients describing the odds of being in the lowest category vs. all higher categories of the response variable are the same as those describing the odds of being in the second-lowest category (and below) vs. all higher categories, etc.
This is a consequence of how the ordered logistic model is defined: there is only one set of coefficients for all the odds you are modeling (lowest category vs. all higher, second lowest vs. all higher, etc.). If you include quadratic or other non-linear terms, it is still the same single set of coefficients for every such split. So whether or not you add quadratic or non-linear terms has nothing to do with the parallel regression assumption.
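As a tiny numeric illustration (hypothetical coefficient values, written in Python): under the shared-coefficient model, changing $x$ shifts every cumulative log-odds curve by the same amount $\beta\,\Delta x$, which is exactly what "parallel" means.

```python
def cum_logodds(x, alphas, beta):
    """Cumulative log odds under proportional odds:
    logit P(Y <= j | x) = alpha_j - beta * x, one beta shared by all cut-points j."""
    return [a - beta * x for a in alphas]

alphas = [-1.0, 0.5, 2.0]   # one intercept per cut-point (hypothetical)
beta = 0.8                  # the single shared slope (hypothetical)
lo1 = cum_logodds(1.0, alphas, beta)
lo2 = cum_logodds(3.0, alphas, beta)
diffs = [round(b - a, 6) for a, b in zip(lo1, lo2)]
print(diffs)  # → [-1.6, -1.6, -1.6]: the same shift at every cut-point
```

Only the intercepts $\alpha_j$ differ across cut-points; the slope, and hence the shift, is identical for all of them.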
41,604
Interpreting case influence statistics (leverage, studentized residuals, and Cook's distance)
No, the fact that you have a large studentized residual does not necessarily mean that the observation is an outlier. (Although some define outlier as simply a large residual, in which case it would be by definition.)
"Influential" is somewhat ambiguous. One could think of leverage as a measure of influence, or of DFbeta as a measure of influence, and neither of these will track Cook's distance perfectly. Thus, Cook's distance is not necessarily the same as influence; but I imagine that you are using them as synonymous, which may be a reasonable thing to do in some context. In that case, Cook's distance does measure influence, but that is tautological.
Yes, a case can have minimal leverage, $1/n$ (if $x_i=\bar x$, in a model with an intercept), and have any size residual.
It may help you to read my answer here: Interpreting plot.lm().
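To make the relationships between these statistics concrete, here is a small self-contained sketch (mine, not from the answer; the data are made up) computing leverage, internally studentized residuals, and Cook's distance for a simple linear regression. Note that the case at $x_i=\bar x$ attains the smallest possible leverage ($1/n$ with an intercept), regardless of its residual, while the extreme-$x$ case has large leverage.

```python
def influence_stats(x, y):
    """Leverage, internally studentized residuals, and Cook's distance
    for simple linear regression y = b0 + b1*x (p = 2 parameters)."""
    n = len(x)
    xbar = sum(x) / n; ybar = sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    b1 = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    b0 = ybar - b1 * xbar
    e = [yi - (b0 + b1 * xi) for xi, yi in zip(x, y)]          # raw residuals
    h = [1 / n + (xi - xbar) ** 2 / sxx for xi in x]           # leverage
    s2 = sum(ei ** 2 for ei in e) / (n - 2)                    # residual variance
    r = [ei / (s2 * (1 - hi)) ** 0.5 for ei, hi in zip(e, h)]  # studentized
    d = [ri ** 2 * hi / (2 * (1 - hi)) for ri, hi in zip(r, h)]  # Cook's D
    return h, r, d

x = [1, 2, 3, 4, 10]            # last point has extreme x: high leverage
y = [1.1, 1.9, 3.2, 3.9, 10.5]
h, r, d = influence_stats(x, y)
print([round(v, 3) for v in h])  # fourth case sits at xbar: leverage = 1/n
```

Cook's distance combines both pieces ($D_i = r_i^2\,h_i / (p\,(1-h_i))$), which is why neither leverage nor the residual alone tracks it.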
41,605
How to handle data normalization in kNN when new test data is received
Your validation process and the reasoning behind it are entirely correct.
Using the same reasoning / model-building process: after you have selected a $k$ by validation, you build the final model using all the training data $X$, and calculate the mean and variance based only on $X$, since these values are also part of the model.
Additionally: a classification model should classify new unlabeled instances independently of each other. But if you calculate the mean and variance based on $Z$ as well, then the prediction for the same unlabeled instance might change depending on what the rest of $Z$ looks like. This is not correct.
I guess the confusion originates from k-nearest-neighbor being a lazy learner, i.e. storing all the instances instead of deriving a model with reduced complexity. Other learners do not store the data, so calculating the normalization parameters across the whole combined set is not even possible there. See this related question, which is not tied to a specific learner: Perform feature normalization before or within model validation?
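A bare-bones sketch of this split (in Python; the data and function names are made up): the statistics are learned from $X$ once, frozen as part of the model, and then applied unchanged to any new instance, so a point's normalized value never depends on what else arrives in $Z$.

```python
def fit_scaler(train):
    """Learn per-feature mean and std from the training data X only."""
    n = len(train)
    means = [sum(col) / n for col in zip(*train)]
    stds = [(sum((v - m) ** 2 for v in col) / n) ** 0.5
            for col, m in zip(zip(*train), means)]
    return means, stds

def transform(rows, means, stds):
    """Apply the frozen training-set statistics to any data, including new Z."""
    return [[(v - m) / s for v, m, s in zip(row, means, stds)] for row in rows]

X = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]   # training data
means, stds = fit_scaler(X)                   # these are part of the model
z = [4.0, 40.0]                               # a new unlabeled instance
print(transform([z], means, stds))
```

Because `transform` never recomputes the statistics, `z` gets the same coordinates whether it arrives alone or in a batch with other test points, which is exactly the independence property the answer describes.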
41,606
What to do when parallel regression assumption violated
It looks to me that you are looking for the "partial" version of proportional odds. Reference:
B. Peterson and F. E. Harrell, Jr., "Partial Proportional Odds Models for Ordinal Response Variables," Applied Statistics (Journal of the Royal Statistical Society, Series C), vol. 39, no. 2, pp. 205–217, 1990.
In the "standard" ordered logit (proportional odds) model, the cumulative probability is modeled as
$$
P(Y > j | X_i) = \frac{1}{1 + \exp(-\alpha_j - X_i \beta)}
$$
where $\alpha$ is the vector of thresholds (as many as the number of classes - 1) and $\beta$ is the vector of coefficients. In the partial version of the proportional odds model, the cumulative probability takes instead the more general form
$$
P(Y > j | X_i) = \frac{1}{1 + \exp(-\alpha_j - X_i \beta - T_i \gamma_j )}
$$
where $T_i$ is a vector containing the values of observation $i$ on the subset of explanatory variables for which the proportional odds assumption is either not assumed or not verified, and $\gamma_j$ is a vector of coefficients (to be estimated, one per threshold $j$) associated with the variables in $T$.
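To make the formula concrete, here is a small sketch (in Python; all numeric values are hypothetical, not from any fitted model) evaluating $P(Y > j \mid X_i)$ at each threshold $j$, with one shared $\beta$ for the proportional-odds variables and a separate $\gamma_j$ for the others:

```python
import math

def cum_prob(x, t, alphas, beta, gammas):
    """P(Y > j | x, t) under the partial proportional odds model:
    1 / (1 + exp(-(alpha_j + x.beta + t.gamma_j))) for each threshold j.
    x: variables with proportional odds (shared beta);
    t: variables whose effect gamma_j may vary across thresholds."""
    probs = []
    for alpha_j, gamma_j in zip(alphas, gammas):
        eta = (alpha_j
               + sum(xi * bi for xi, bi in zip(x, beta))
               + sum(ti * gi for ti, gi in zip(t, gamma_j)))
        probs.append(1 / (1 + math.exp(-eta)))
    return probs

# Hypothetical fitted values for a 4-category response (3 thresholds).
alphas = [1.0, 0.0, -1.5]
beta = [0.5]                      # shared across thresholds
gammas = [[0.2], [0.0], [-0.4]]   # one gamma vector per threshold j
p = cum_prob(x=[1.0], t=[2.0], alphas=alphas, beta=beta, gammas=gammas)
print([round(v, 3) for v in p])
```

Setting every $\gamma_j$ to the same vector recovers the ordinary proportional odds model, which is why this is a strict generalization.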
41,607
Solving formula for confidence intervals on two proportions
If you did want to create a confidence interval for the difference of two proportions, there are better procedures than the one you are using: you might consider making use of the Wilson procedure. The Wikipedia article on binomial confidence intervals mentions this and several other possibilities, but does not show the method applied to differences in proportions: for that, consult a reference such as Newcombe, Robert G., "Interval Estimation for the Difference Between Independent Proportions: Comparison of Eleven Methods," Statistics in Medicine, 17, 873-890 (1998).
However it appears that you don't want to create a confidence interval, but rather perform a hypothesis test. The formula you would want to use is a rearranged version of the given one. Let me write $p_a$ and $p_b$ for the proportions in groups A and B, and their sample sizes as $m$ and $n$ respectively. Then your test statistic is:
$$z = \frac{p_a - p_b}{\sqrt{\frac{p_a(1-p_a)}{m}+\frac{p_b(1-p_b)}{n}}}$$
Your result will be significant if this exceeds the upper critical value $z_\text{crit} = \Phi^{-1}(1 - \alpha/2)$ (where $\alpha$ is your significance level) or if it is below the lower critical value, which by symmetry of the normal distribution is $-z_\text{crit}$. If you were interested in a 90% confidence interval, note that this is equivalent to setting $\alpha$ to 10%, not to 90%! These two sides of significance correspond to whether the proportions differ because $p_a$ exceeds $p_b$ (positive $z$) or vice versa (negative $z$). If you are interested in the critical values of $p_a$ which make this just significant, then you need to solve this equation set equal to $\pm z_\text{crit}$. Although your question mentions only the upper critical value, I will write as if you are interested in both possibilities - partly because this corresponds better to what you mentioned about the upper and lower bounds of a 90% confidence interval, and partly because two-sided testing is usually a good idea in general. The upper limit you get by testing with $\alpha$ at 10% is equivalent to the one-sided test you'd get if you tested at 5%, so no great variation in the method is required.
If you feel daunted by the task of algebraic rearrangement, one option is to use a computer algebra system to do the work for you. One freely available, open source product is Sage (which is actually rather more powerful than just a CAS). Rearranging to make one variable the subject, is essentially the same as solving the equation for that variable in terms of the other variables. A brief tutorial on how to solve equations symbolically in Sage is here. This would then give you a formula you can set up in Excel.
A paid-for product is Mathematica, but many basic features of Mathematica are freely available online at Wolfram Alpha. Go there and type:
solve z=(a-b)/sqrt((a(1-a))/m + (b(1-b))/n) for a
The output will be the two solutions for $a$ (Wolfram Alpha displays them as an image, not reproduced here).
Here I have written $z_\text{crit}$ as $z$, $p_a$ as $a$ and $p_b$ as $b$, but I hope the meaning is still clear. Simply by changing $m$, $n$, $z$ and $b$ into appropriate cell references you can easily implement this formula in Excel. If cell A1 contains your significance level, $\alpha$, then the cell you use for the critical $z$-score should contain the formula =NORM.S.INV(1-A1/2), so you should get the famous 1.96 (to two decimal places) if you set $\alpha$ at the 5% level, or 1.64 if you test at 10%.
Note that two solutions actually arise, corresponding to the two critical values for $a$, without having to check solve -z=(a-b)/sqrt((a(1-a))/m + (b(1-b))/n) for a for the case with negative $z$. It is clear that the first line of the rearrangement must be:
$$z_\text{crit}^2 = \frac{(p_a - p_b)^2}{\frac{p_a(1-p_a)}{m}+\frac{p_b(1-p_b)}{n}}$$
Beyond this point it no longer matters whether we used the positive or negative value for $z_\text{crit}$. It's not so hard to see where Mathematica derives its solution from. Multiply by the denominator and we obtain:
$$z_\text{crit}^2 \left(\frac{p_a(1-p_a)}{m}+\frac{p_b(1-p_b)}{n}\right)= (p_a - p_b)^2$$
Then multiply by $mn$:
$$z_\text{crit}^2 \left(np_a(1-p_a) + mp_b(1-p_b)\right)= mn(p_a - p_b)^2$$
Once the brackets are multiplied out and terms are collected together, this will be a quadratic in $p_a$. The form of Mathematica's solutions were just the two roots to the quadratic formula but it's easier to let it deal with the simplification!
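For those who would rather skip the CAS, the quadratic can also be solved directly. A sketch (in Python; the sample values for $p_b$, $m$ and $n$ are made up) that collects the terms from the last equation above, applies the quadratic formula, and verifies that both roots make the test statistic hit $\pm z_\text{crit}$:

```python
import math

def critical_pa(pb, m, n, z):
    """Solve z^2 * (pa(1-pa)/m + pb(1-pb)/n) = (pa - pb)^2 for pa.
    Collecting terms gives the quadratic
      (1 + z^2/m) pa^2 - (2 pb + z^2/m) pa + (pb^2 - z^2 pb(1-pb)/n) = 0,
    whose two roots are the lower and upper critical values of pa."""
    c = z * z * pb * (1 - pb) / n
    A = 1 + z * z / m
    B = -(2 * pb + z * z / m)
    C = pb * pb - c
    disc = math.sqrt(B * B - 4 * A * C)
    return (-B - disc) / (2 * A), (-B + disc) / (2 * A)

def z_stat(pa, pb, m, n):
    """The original two-proportion test statistic."""
    return (pa - pb) / math.sqrt(pa * (1 - pa) / m + pb * (1 - pb) / n)

z_crit = 1.959963985          # two-sided 5% level
lo, hi = critical_pa(pb=0.4, m=50, n=60, z=z_crit)
# Both roots should reproduce |z| = z_crit when plugged back in.
print(lo, hi, z_stat(lo, 0.4, 50, 60), z_stat(hi, 0.4, 50, 60))
```

This is the same formula you would paste into Excel, just with the algebra done explicitly rather than by Mathematica.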
|
Solving formula for confidence intervals on two proportions
|
If you did want to create a confidence interval for the difference of two proportions, there are better procedures than the one you are using: you might consider making use of the Wilson procedure. Th
|
Solving formula for confidence intervals on two proportions
If you did want to create a confidence interval for the difference of two proportions, there are better procedures than the one you are using: you might consider making use of the Wilson procedure. The Wikipedia article on binomial confidence intervals mentions this and several other possibilities, but does not show the method applied to differences in proportions: for that, consult a reference such as Newcombe, Robert G., "Interval Estimation for the Difference Between Independent Proportions: Comparison of Eleven Methods," Statistics in Medicine, 17, 873-890 (1998).
However it appears that you don't want to create a confidence interval, but rather perform a hypothesis test. The formula you would want to use is a rearranged version of the given one. Let me write $p_a$ and $p_b$ for the proportions in groups A and B, and their sample sizes as $m$ and $n$ respectively. Then your test statistic is:
$$z = \frac{p_a - p_b}{\sqrt{\frac{p_a(1-p_a)}{m}+\frac{p_b(1-p_b)}{n}}}$$
Your result will be significant if this exceeds the upper critical value $z_\text{crit} = \Phi^{-1}(1 - \alpha/2)$ (where $\alpha$ is your significance level) or if it is below the lower critical value, which by symmetry of the normal distribution is $-z_\text{crit}$. If you were interested in a 90% confidence interval, note that this is equivalent to setting $\alpha$ as 10%, not as 90%! These two sides of significance correspond to whether the proportions differ because $p_a$ exceeds $p_b$ (positive $z$) or vice versa (negative $z$). If you are interested in the critical values for $p_a$ which make this just significant, then you need to solve this equation as equal to $\pm z_\text{crit}$. Although your question mentions only the uppper critical value I will write as if you are interested in both possibilities - partly because this corresponds better to what you mentioned about the upper and lower bounds of a 90% confidence interval, and partly because two-sided testing is usually a good idea in general. The upper limit you get by testing with $\alpha$ as 10% is equivalent to the one-sided test you'd get if you tested at 5%, so no great variation in the method is required.
If you feel daunted by the task of algebraic rearrangement, one option is to use a computer algebra system to do the work for you. One freely available, open source product is Sage (which is actually rather more powerful than just a CAS). Rearranging to make one variable the subject, is essentially the same as solving the equation for that variable in terms of the other variables. A brief tutorial on how to solve equations symbolically in Sage is here. This would then give you a formula you can set up in Excel.
A paid-for product is Mathematica, but many basic features of Mathematica are freely available online at Wolfram Alpha. Go there and type:
solve z=(a-b)/sqrt((a(1-a))/m + (b(1-b))/n) for a
The output will be two solutions for $a$, the roots of a quadratic in $a$ (Wolfram Alpha displays the result as an image, not reproduced here).
Here I have written $z_\text{crit}$ as $z$, $p_a$ as $a$ and $p_b$ as $b$ but I hope the meaning is still clear. Simply by changing $m$, $n$, $z$ and $b$ into appropriate cell references you can easily implement this formula in Excel. If cell A1 contains your level of significance, $\alpha$, then the cell you use for the critical $z$-score should contain the formula =NORM.S.INV(1-A1/2), so you should get the famous 1.96 (to two decimal places) if you set $\alpha$ at the 5% level, or 1.64 if you test at 10%.
Note that two solutions arise, corresponding to the two critical values for $a$, without having to check solve -z=(a-b)/sqrt((a(1-a))/m + (b(1-b))/n) for a for the case with negative $z$. It is clear that the first line of the rearrangement must be:
$$z_\text{crit}^2 = \frac{(p_a - p_b)^2}{\frac{p_a(1-p_a)}{m}+\frac{p_b(1-p_b)}{n}}$$
Beyond this point it no longer matters whether we used the positive or negative value for $z_\text{crit}$. It's not so hard to see where Mathematica derives its solution from. Multiply by the denominator and we obtain:
$$z_\text{crit}^2 \left(\frac{p_a(1-p_a)}{m}+\frac{p_b(1-p_b)}{n}\right)= (p_a - p_b)^2$$
Then multiply by $mn$:
$$z_\text{crit}^2 \left(np_a(1-p_a) + mp_b(1-p_b)\right)= mn(p_a - p_b)^2$$
Once the brackets are multiplied out and terms are collected together, this becomes a quadratic in $p_a$. Mathematica's solutions are just the two roots given by the quadratic formula, but it's easier to let it deal with the simplification!
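If you prefer to avoid the CAS entirely, the quadratic can be solved directly. The sketch below (with hypothetical values for $p_b$, $m$, $n$ and $\alpha$; the function name is my own) collects the coefficients from the equation above and applies the quadratic formula:

```python
# Find the two critical values of p_a that make the z statistic equal to
# +/- z_crit, by solving the quadratic derived above. Illustration values.
from statistics import NormalDist

def critical_pa(p_b, m, n, z):
    # z^2 * (n*pa*(1-pa) + m*pb*(1-pb)) = m*n*(pa - pb)^2, rearranged
    # into A*pa^2 + B*pa + C = 0:
    A = n * (m + z**2)
    B = -n * (2 * m * p_b + z**2)
    C = m * p_b * (n * p_b - z**2 * (1 - p_b))
    disc = (B**2 - 4 * A * C) ** 0.5
    return (-B - disc) / (2 * A), (-B + disc) / (2 * A)

z_crit = NormalDist().inv_cdf(0.95)            # alpha = 10%, two-sided
lo, hi = critical_pa(p_b=0.50, m=400, n=400, z=z_crit)

# Sanity check: each root reproduces |z| = z_crit in the original formula.
for pa in (lo, hi):
    se = (pa * (1 - pa) / 400 + 0.5 * 0.5 / 400) ** 0.5
    assert abs(abs((pa - 0.5) / se) - z_crit) < 1e-9
```

This gives the same pair of roots the CAS would report, and the sanity check confirms they round-trip through the original $z$ formula.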
|
Solving formula for confidence intervals on two proportions
If you did want to create a confidence interval for the difference of two proportions, there are better procedures than the one you are using: you might consider making use of the Wilson procedure.
|
41,608
|
Why is the optimal policy in Markov Decision Process (MDP), independent of the initial state?
|
The intuition behind the argument saying that the optimal policy is independent of initial state is the following:
The optimal policy is defined by a function that selects an action for every possible state and actions in different states are independent.
Formally speaking, for an unknown initial distribution, the value function to maximize would be the following (not conditioned on initial state)
$v^\pi = E[ R(s_0,a_0) + \gamma R(s_1,a_1) + ... | \pi ] $
Thus the optimal policy is the policy that maximizes $v^\pi=x_0^T V^\pi$ where $x_0$ is the vector defined as $x_0(s)=Prob[s_0=s]$ and $V^\pi$ is the vector with $V^\pi (s)$ defined the same as your definition. Thus the optimal policy $\pi ^*$ is a solution of the following optimization
$\max_{\pi} x_0^T V^\pi = \max_{\pi} \sum _{s} x_0(s)V^\pi (s)= \sum _{s} x_0(s) \max_{a\in A_s}V^\pi (s)= \sum _{s} x_0(s)V^* (s)$
where $A_s$ is the set of actions available at state $s$. Note that the third equation is only valid because $x_0(s)\geq 0$ and we can separate the decision policy by selecting independent actions at different states. In other words, if there were constraints for example on $x_t$ (e.g., if the optimization only searches among policies that guarantee $x_t \leq d$ for $t>0$), then the argument is not valid anymore and the optimal policy is a function of the initial distribution.
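A tiny numerical illustration (with a made-up 2-state, 2-action MDP) shows the point: value iteration and the per-state argmax never reference $x_0$, so any initial distribution yields the same optimal policy.

```python
# Hypothetical 2-state, 2-action MDP: P[a][s][s'] transition probabilities,
# R[s][a] rewards, gamma discount -- all values are made up.
P = [[[0.9, 0.1], [0.2, 0.8]],   # action 0
     [[0.5, 0.5], [0.6, 0.4]]]   # action 1
R = [[1.0, 0.0], [0.0, 2.0]]     # R[s][a]
gamma = 0.9

# Value iteration to (near) convergence; note x0 never appears anywhere.
V = [0.0, 0.0]
for _ in range(1000):
    V = [max(R[s][a] + gamma * sum(P[a][s][t] * V[t] for t in (0, 1))
             for a in (0, 1)) for s in (0, 1)]

# Greedy policy w.r.t. V*: a per-state argmax, independent of x0(s).
policy = [max((0, 1), key=lambda a: R[s][a] +
              gamma * sum(P[a][s][t] * V[t] for t in (0, 1))) for s in (0, 1)]
```

Whatever weights $x_0(s) \geq 0$ you place on the states, the maximization decomposes state by state exactly as in the equation above, so `policy` is unchanged.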
For more details, you can check the following arXiv report:
"Finite-Horizon Markov Decision Processes with State Constraints", by
Mahmoud El Chamie, Behcet Acikmese
|
41,609
|
Why is the optimal policy in Markov Decision Process (MDP), independent of the initial state?
|
why is it that $π^∗$ has the property that its the optimal policy for all states?
This is actually because of the Markov property of the environment. The Markov property states that the history of previous states and actions leading to state $s$ does not affect $R(s)$ or $P_{sa}(s')$. So in any state $s$, the optimal policy need only consider $R(s, a)$ and $P_{sa}(s')$ for each available action $a$, without considering how it reached $s$.
|
41,610
|
Why is the optimal policy in Markov Decision Process (MDP), independent of the initial state?
|
I haven't dived into the details for quite some time, but it appears very intuitive to me that $π^∗$ is valid for all states if you take practical examples.
The case of solving a maze is one of these: the agent is (potentially randomly) positioned in a maze and is trying to get out of it. It gets rewarded only when it finds the exit. Through experience and propagation of the reward, it will learn which path to choose from any position. So the optimal policy does provide the best direction (action) choice for any position (state).
Of course, a more thorough explanation through the math is still due ;-)
|
41,611
|
Why is the optimal policy in Markov Decision Process (MDP), independent of the initial state?
|
In my opinion, any policy that achieves the optimal value is an optimal policy. Since the optimal value function for a given MDP is unique, this optimal value function actually defines an equivalence class over the policy space, i.e., the policies whose value is optimal are all equivalent. In other words, although optimal policies may differ from each other (for a given state they may take different action sequences to achieve the same optimal value), their value function is the same. In this sense, no matter which state you are at, as the optimal value for this state is the same, the policies achieving this optimal value are essentially equivalent, and they are all optimal policies at this state.
|
41,612
|
Why is the optimal policy in Markov Decision Process (MDP), independent of the initial state?
|
Perhaps one should view the optimization as being taken over the space of functions spanned by the admissible policies, as explicitly described by the MDP definition.
Probabilities could always be forced to follow the stricter, more explicit notation of functions of many variables from general mathematics. All moments could likewise be made explicit integrals over many variables (or discrete versions thereof). But the probability space does add some extra structure to the set of such multivariable functions. And since some mathematical object in the space of dependent variables is the focus of all this machinery, you can bet that you will have to consider various partial integrals, probability oblige, unless you want to forgo the extra structure that Bayes' theorem provides. It is a bit like the complex plane: it would look like $\mathbb{R}^2$ if there were not that quirky relationship between the dimensions. This is just an analogy.
My point is that, given the multivariate interactions, some notational shortcuts become inevitable for the sake of readability. However, care should be taken to remain aware, and to make readers aware, that shortcuts have been taken, especially when communicating with multidisciplinary audiences and the various notational expectations or fluency that come with that (or just so as not to get lost oneself...).
I am not finished; there is an additional twist I need to mention to make the above relevant to this thread: MDPs are meant to answer a restricted class of mathematical problems; they pose the question of finding some non-empty subset of optimal policies in the valuation sense. So we are going to be optimizing over a space of policies (they become points, variables themselves, hence the confusing subscript that compresses into one symbol not a value but a function used as a conditioning variable). Does this make sense?
Maybe somebody should tweak what I tried to say; if it is too bad, I could make it a wiki, for those who get my intent, that is.
I just thought this was relevant because I sensed that part of the difficulty lies in the different notational assumptions/expectations of each individual involved (including the original authors). I may be wrong; let's see.
|
41,613
|
Why is the optimal policy in Markov Decision Process (MDP), independent of the initial state?
|
By definition, $\pi^*$ is optimal if $V_{\pi^*}(s) = \max_{\pi} V_{\pi}(s)$ for all $s$. So isn't it just the definition?
|
41,614
|
Do I need more than one random slope?
|
It does make sense, but you have to be a little bit careful in setting up the model. The way you've written the model,
S ~ X1 + X2 + X3 + (X1|biome) + (X2|biome) + (X3|biome)
it implicitly incorporates an intercept term with each random effect slope. You could write it as
S ~ X1 + X2 + X3 + (1|biome) + (X1+0|biome) + (X2+0|biome) + (X3+0|biome)
which will estimate the intercept and all of the slopes separately. (It might be a good idea to center your covariates as recommended by Schielzeth 2010 ...)
Alternatively, in principle (but see below for caveats) you could use
S ~ X1 + X2 + X3 + (X1+X2+X3|biome)
which would fit correlations among the slopes.
More fundamentally, however, I would consider (recommend?) fitting the among-biome variation as fixed effects rather than random effects,
S ~ (X1+X2+X3)*biome
(then you could just use lm instead of lmer). Because you only have samples for 6 biomes, you would be estimating random-effects variances (and in the case of (X1+X2+X3|biome), a 4x4 random-effects variance-covariance matrix) from only 6 groups.
One more comment: from your data, it looks like you have multiple observations with the same covariate (temperature range) value, which suggests that you are getting multiple observations from the same site. I would think about incorporating site as a random effect ...
|
41,615
|
What makes constant function an estimator?
|
I think it's not so much a question of ''what makes a constant function'' an estimator but ''what makes an estimator an estimator''.
First, from a mathematical point of view, an estimator is a function of a special kind: it is a random variable that fulfills some requirements. It is a statistic, which means it has to be independent of $\theta$ (its ''estimand''). A constant function is independent of $\theta$ (indeed, it's independent of anything :).
Example: $T =\bar{X}$ is a statistic (and an estimator of $\mu$), whereas $S=\bar{X}-\mu$ is not a statistic (because it depends on $\mu$ itself).
So a constant function is an object that possesses these qualities, which justifies calling it ''an estimator''.
The important point is that what we desire is not just "any" estimator.
An estimator may be biased, which means that it systematically adds or subtracts something from every sample. For example, you want your bathroom scale to show your weight exactly as it is.
We want an estimator that minimizes the mean squared error ($MSE=E(\hat{\theta}-\theta)^2$). But there is no single estimator that minimizes this error uniformly; rather, there is a family of such estimators. So which one is the best? A good estimator is one that fulfills some requirements. I know about three of them:
unbiasedness - it does not add or subtract anything on average; mathematically,
$E\hat{\theta} = \theta$
consistency (this is what is written in your book)
maximal efficiency, which refers to the estimator's variance - we want as small a variance as possible.
Someone wrote about computational cost, but that is not a mathematical/probabilistic issue.
Thus, a constant function actually is an estimator; however, it is not a desirable one, because it is biased (in general) and not consistent (as you noticed). These are the differences between a constant function and other (good) estimators.
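A quick simulation (hypothetical numbers: estimating $\mu = 5$ from normal samples) makes the bias and inconsistency of a constant estimator concrete:

```python
# Compare the constant estimator T = 3 with the sample mean when the true
# parameter is mu = 5. All numbers here are made-up illustration values.
import random

random.seed(0)
mu, n, reps = 5.0, 50, 2000
const_err, mean_err = [], []
for _ in range(reps):
    x = [random.gauss(mu, 1.0) for _ in range(n)]
    const_err.append((3.0 - mu) ** 2)        # constant estimator: fixed error
    mean_err.append((sum(x) / n - mu) ** 2)  # sample mean: error shrinks with n

mse_const = sum(const_err) / reps            # stays at (3 - 5)^2 = 4, any n
mse_mean = sum(mean_err) / reps              # approx 1/n = 0.02 here
```

Increasing `n` drives the sample mean's MSE toward zero (consistency) while the constant's MSE never moves, which is exactly the distinction made above.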
This is more or less my answer to your question. I think going further would make us dig into some mathematical equations to show more differences, similarities, etc.
|
41,616
|
What makes constant function an estimator?
|
An estimator is simply some function of a potential sample of data that seeks to estimate an unknown population parameter. It's a recipe or a formula. Your constant is an estimator that does not depend on the data at all: the estimate it produces will always be the same.
There's an infinite number of estimators, and most of them are "bad". What does that mean? Estimators have desirable properties, which lead them to produce "good" estimates under certain conditions. Some of these are
Computational cost
Unbiasedness
Consistency
Efficiency
Robustness (insensitivity to violations of the assumptions under which the estimator retains its desirable properties)
These goals are often at odds with each other. The constant has the lowest computational cost, but arguably none of the others.
|
41,617
|
What makes constant function an estimator?
|
Constant estimators/predictors have a use as benchmarks against which one judges the performance of "proper" estimators/predictors.
A standard example is in the context of binary logistic regression, where we attempt to estimate conditional probabilities, exploiting the information that possibly resides in the regressors in order to predict better, in some sense, the probability related to the dependent variable,
$$P(Y_i=1 \mid \mathbf x_i) = \Lambda(g(\mathbf x_i'\beta))$$
where $\Lambda()$ is the Logistic cumulative distribution function, and $g(\mathbf x_i'\beta)$ is the logit.
But since we have the sample available, we can also very cheaply estimate the unconditional probability,
$$\hat P(Y=1) = \frac 1n \sum_{i=1}^n y_i$$
We can then compare the predictive performance of $\hat P(Y_i=1 \mid \mathbf x_i) = \Lambda(g(\mathbf x_i'\hat \beta))$ against the "naive" (and constant) estimator $\hat P(Y=1)$. The former should do better; otherwise, all the trouble we went to in using the information about the probability of $Y$ contained in the $X$'s did not pay off.
A CV thread exactly on this issue can be found here (look also at the comments).
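A minimal sketch of this benchmark comparison, with an entirely made-up sample and a simple group-rate "model" standing in for the fitted logistic regression:

```python
# Compare the log-loss of an "informed" predictor against the naive
# constant predictor P(Y=1) = ybar. Data are hypothetical (x, y) pairs.
import math

data = [(0, 0), (0, 0), (0, 1), (1, 1), (1, 1), (1, 0), (1, 1), (0, 0)]

ybar = sum(y for _, y in data) / len(data)       # constant benchmark

def group_rate(v):
    # Empirical P(Y=1 | x=v); a stand-in for the fitted conditional model.
    ys = [y for x, y in data if x == v]
    return sum(ys) / len(ys)

def log_loss(pred):
    return -sum(y * math.log(pred(x)) + (1 - y) * math.log(1 - pred(x))
                for x, y in data) / len(data)

loss_model = log_loss(group_rate)                # informed predictor
loss_naive = log_loss(lambda x: ybar)            # constant benchmark
# The informed predictor should not do worse than the constant benchmark.
```

If `loss_model` failed to beat `loss_naive` on held-out data, the regressors would be adding nothing beyond the unconditional rate, which is precisely the benchmark role described above.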
|
41,618
|
Residual variance for glmer
|
One possible interpretation of the logistic regression model is to state that there is an underlying score $$ y^*_i = x_i'\beta + \epsilon_i, $$ with the observed variable being $$y_i = \left\{ \begin{array}{ll} 1, & y^*_i > 0 \\ 0, & y^*_i \le 0\end{array}\right.$$ This would be the way logistic regression would be introduced in social sciences, as opposed to biostatistics. In this formulation, $\epsilon$ follows a logistic distribution, which does have the variance of $\pi^2/3$. Mixed models stick an additional random effects term into the equation, and introduce the double subscripts, making it $$ y^*_{ij} = x_{ij}'\beta + u_i + \epsilon_{ij}, $$ where $u_i$ is assumed normal because the normal distribution is something that everybody understands. The variance of $u_i$ is estimated by your mixed model package (although without the standard errors; Douglas Bates has a pretty strong stand on it). So the total variance is then $\sigma^2_u + \mathbb{V}[\epsilon] = \sigma^2_u + \pi^2/3$.
In your somewhat more complicated model, you just need to add all the variances of the variance components. It seems weird to me that the strongest effect that you have is that of the interaction, with the magnitudes of the main effects being smaller. See if this makes sense in your application. Also, Laplace approximation is at best a starting point; you need to increase the number of integration points to get accurate estimates of variance components.
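Numerically, summing the variance components on the latent scale looks like this (the component values are hypothetical stand-ins for glmer output; adding slope variances like this follows the advice above, though strictly it ignores covariate values and any covariances):

```python
# Latent-scale variance partition for a logistic mixed model.
# Variance component values are made-up stand-ins for fitted output.
import math

var_components = [0.8, 0.3, 1.1]     # e.g. intercept, slope, interaction
resid_var = math.pi ** 2 / 3         # logistic residual variance, ~3.29
total_var = sum(var_components) + resid_var
icc = var_components[0] / total_var  # share due to the random intercept
```

The fixed $\pi^2/3$ term is why the latent-scale "total variance" of a logistic mixed model never shrinks below about 3.29, no matter how small the random-effects variances are.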
|
41,619
|
Show the shortest confidence interval of a normal distribution
|
The length of the interval is
$$\left(\bar{X}-Z_{(1-k)\alpha}\frac{\sigma}{\sqrt{n}}\right) - \left(\bar{X}-Z_{1 - k\alpha}\frac{\sigma}{\sqrt{n}}\right) = \left(Z_{1 - k\alpha} - Z_{(1-k)\alpha}\right)\frac{\sigma}{\sqrt{n}}. $$
Because when $k$ is varied $\sigma/\sqrt{n}$ remains constant, this is minimized provided $Z_{1 - k\alpha} - Z_{(1-k)\alpha}$ is minimized.
Another way to look at it is to write
$$z = Z_{(1-k)\alpha},\ w = Z_{1 - k\alpha}.$$
Because the interval $[z,w]$ should contain $1-\alpha$ probability and obviously both $z$ and $w$ will be finite at a minimum, necessarily
$$-\infty \lt z \lt Z_\alpha$$
and $k$ must lie between $0$ and $1$.
According to the Fundamental Theorem of Calculus, when $z$ is increased infinitesimally to $z+dz$, the probability of the interval decreases by $f(z)dz$ where $f$ is the PDF for $\bar X$. To compensate, $w$ must increase by an infinitesimal amount $dw$ for which
$$f(z)dz = f(w)dw.$$
In the figure, the interval $[z,w]$ has been shifted to $[z+dz,w+dw]$. To keep the probabilities the same, $dw$ is only about half of $dz$ because the height of the PDF at $z$, $f(z)$, is only about half the height at $w$. Therefore this shift has shrunk the interval. Shifting should continue until no more shrinking is possible, which will therefore occur when the heights at the interval endpoints are equal (as argued below).
At the same time the length of the interval, given by $w-z$, changes by $dw-dz$. A minimum will occur at a critical point, giving the criterion $0 = dw-dz$, implying by virtue of the preceding result that
$$f(z) = f(w).$$
For any unimodal continuous distribution with PDF $f$ there will be (practically by definition) at most two solutions to the equation $f(z) = c$ for any number $c$. Moreover, as $c$ decreases, those solutions--if they exist--must draw further apart. That shows there will be a single solution to the preceding equation, with $z$ less than the mode and $w$ greater than the mode, provided $0 \lt \alpha \lt 1/2$, and that it will be a global minimum. (For $\alpha=1/2$ the interval will reduce to a point. Although any point would do, a mode would be a point of greatest density. For $1/2\lt \alpha \lt 1$ there are no solutions.)
Finally, when the distribution is also symmetric (as in the case of a Normal distribution), then necessarily $z$ and $w$ must be equidistant from the mode, implying $k=1/2$.
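The conclusion $k=1/2$ can also be checked numerically. The sketch below (standard-library Python; the normal quantile is obtained by bisection on `erf`, and the parametrization puts mass $k\alpha$ in the lower tail and $(1-k)\alpha$ in the upper tail) grid-searches the standardized interval length over $k$:

```python
import math

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def Phi_inv(p):
    """Standard normal quantile by bisection (illustrative, not fast)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

alpha = 0.05

def interval_length(k):
    # lower quantile at k*alpha, upper quantile at 1 - (1-k)*alpha
    return Phi_inv(1.0 - (1.0 - k) * alpha) - Phi_inv(k * alpha)

ks = [i / 200 for i in range(1, 200)]
best_k = min(ks, key=interval_length)
print(best_k, round(interval_length(best_k), 4))  # 0.5 3.9199
```

As expected, the minimum length occurs at the symmetric split, where the standardized length is $2 \times 1.96 \approx 3.92$.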
|
41,620
|
Are eigenvectors obtained in Kernel PCA orthogonal?
|
Yes, they are orthogonal. To see that your last expression equals zero, write it in the vector notation: $$\sum_{i,j=1}^n \alpha_i \beta_j K_{ij} = \boldsymbol \alpha^\top \boldsymbol K \boldsymbol \beta=0,$$ because $\boldsymbol \alpha$ and $\boldsymbol \beta$ are two different eigenvectors of $\boldsymbol K$. It is a standard linear algebra result: $$\boldsymbol K \boldsymbol \alpha = \lambda_1 \boldsymbol \alpha,\, \boldsymbol K\boldsymbol \beta = \lambda_2 \boldsymbol \beta \\ \Rightarrow \boldsymbol \beta^\top \boldsymbol K\boldsymbol \alpha = \boldsymbol \beta^\top \lambda_1 \boldsymbol \alpha = (\boldsymbol \beta^\top \lambda_1 \boldsymbol \alpha)^\top = \boldsymbol \alpha^\top \lambda_1 \boldsymbol \beta = \frac{\lambda_1}{\lambda_2}\boldsymbol \alpha^\top \lambda_2 \boldsymbol \beta = \frac{\lambda_1}{\lambda_2}\boldsymbol \alpha^\top \boldsymbol K\boldsymbol \beta = \frac{\lambda_1}{\lambda_2}(\boldsymbol \alpha^\top \boldsymbol K\boldsymbol \beta)^\top = \frac{\lambda_1}{\lambda_2}\boldsymbol \beta^\top \boldsymbol K\boldsymbol \alpha,$$ so if $\lambda_1 \ne \lambda_2$ then $\boldsymbol \beta^\top \boldsymbol K\boldsymbol \alpha=\boldsymbol \alpha^\top \boldsymbol K\boldsymbol \beta=0$.
Note, however, that the covariance matrix in the target space often cannot even be defined, because the target space can be infinite-dimensional (this is the case e.g. with the Gaussian kernel). This means that your $\mathbf a$ and $\mathbf b$ can be infinite-dimensional, making the above computations somewhat sloppy... What I think is more important is that $\boldsymbol \alpha$ and $\boldsymbol \beta$ are orthogonal -- this means that kernel PCs have zero correlation. See my answer here for more details: Is Kernel PCA with linear kernel equivalent to standard PCA?
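A quick numerical illustration with numpy (an arbitrary RBF kernel matrix built from random points; any symmetric $\boldsymbol K$ would do):

```python
import numpy as np

rng = np.random.default_rng(42)

# A symmetric PSD "kernel" matrix: RBF kernel on random data (illustrative).
X = rng.normal(size=(20, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / 2.0)

w, V = np.linalg.eigh(K)       # eigh returns orthonormal eigenvectors
a, b = V[:, -1], V[:, -2]      # top two eigenvectors (distinct eigenvalues)

print(abs(a @ b), abs(a @ K @ b))  # both ~0: orthogonal, and the quadratic form vanishes
```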
|
41,621
|
Are there problems with arbitrary application of bootstrap?
|
The term "bootstrap" covers many (somewhat related) things -- enough to fill a number of books (which indeed it does). Some things are more prone to problems with naive application than others.
If you're just applying resampling directly to observations, the most obvious problem you might encounter with a naive application of the bootstrap to an index is that the data values are
(i) almost surely not stationary; the raw values are nothing like exchangeable (that's pretty much the point of an index, really).
(ii) dependent over time; even if they were stationary you still can't simply shuffle them about willy nilly without messing up time-dependence, and hence inference about variability.
If it is OK, are the resulting standard errors robust to heteroskedasticity, serial correlation, etc.?
No, that's part of why it's not okay. If you have a model for the way the expectation moves, the way the variance changes and the dependence over time, you might be able to do something like block-bootstrap the residuals, say. Blocks of residuals might be nearly exchangeable and might preserve enough of the dependence structure to give reasonable results.
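A minimal sketch of the block idea in Python (numpy only; the block length and the AR(1) toy residuals are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy AR(1) "residuals" with serial correlation.
n = 100
resid = np.empty(n)
resid[0] = rng.normal()
for t in range(1, n):
    resid[t] = 0.7 * resid[t - 1] + rng.normal()

def moving_block_bootstrap(series, block_len, rng):
    """Resample contiguous blocks so short-range dependence is preserved
    within each block (naive i.i.d. resampling would destroy it)."""
    m = len(series)
    n_blocks = -(-m // block_len)  # ceiling division
    starts = rng.integers(0, m - block_len + 1, size=n_blocks)
    out = np.concatenate([series[s:s + block_len] for s in starts])
    return out[:m]                 # trim to the original length

boot = moving_block_bootstrap(resid, block_len=10, rng=rng)
print(len(boot))  # 100
```

The choice of block length trades off preserving dependence (longer blocks) against resampling variety (more, shorter blocks), and in practice the residuals must come from a model that has already accounted for the trend and variance structure.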
|
41,622
|
What’s wrong with this way of fitting time-dependent coefficients in a Cox regression?
|
I am not very familiar with the mechanics of survival analysis data management in R, but I think I can explain why this happens and show an example using Stata.
The hazard of the risk changes the instant the variable changes as time flows (no delays or anticipation allowed), though it remains constant in the intervals that form the rows in the data. You can achieve the right result by splitting the data at the observed failure times and manually generating the time-varying covariates that you include in the model. Splitting the data allows you to estimate a separate HR for each episode and get the right time-varying coefficient. The cost is data inflation.
Here's an example using the hip fracture study where some elderly folks were given an inflatable device to protect them from falls and an initial dosage of bone-fortifying drug. We will treat the initial dosage as continuous and interact it with time. The HR model is
$$
h(t \mid x) = h_0(t) \cdot \exp(\beta \cdot \mathit{protect} + \gamma \cdot \mathit{init\_dosage} + \eta \cdot \mathit{init\_dosage} \times t)
$$
This probably makes very little sense pharmacologically since the effect of the initial dosage should decay over time, but we will roll with it to make the example more similar to your question.
First we load the data and take a peek at it:
. set more off
. use http://www.stata-press.com/data/cggm3/hip4, clear
(hip fracture study)
. sort id _t
. list id _t0 _t _d init_drug_level if inlist(id,1,5,9), sepby(id) noobs ab(30)
+--------------------------------------+
| id _t0 _t _d init_drug_level |
|--------------------------------------|
| 1 0 1 1 50 |
|--------------------------------------|
| 5 0 4 1 100 |
|--------------------------------------|
| 9 0 5 0 50 |
| 9 5 8 1 50 |
+--------------------------------------+
The variable id indexes patients, _t0 is the entry date, _t is study time in months, and _d indicates failure. The initial dosage was either 50 or 100 mg.
Here's the wrong way to do things (create the interaction between dosage and time and use the data as is):
. gen current_drug_level1 = init_drug_level *_t
. stcox protect init_drug_level current_drug_level1, nolog
failure _d: fracture
analysis time _t: time1
id: id
Cox regression -- Breslow method for ties
No. of subjects = 48 Number of obs = 106
No. of failures = 31
Time at risk = 714
LR chi2(3) = 56.88
Log likelihood = -70.129892 Prob > chi2 = 0.0000
-------------------------------------------------------------------------------------
_t | Haz. Ratio Std. Err. z P>|z| [95% Conf. Interval]
--------------------+----------------------------------------------------------------
protect | .1002764 .0563649 -4.09 0.000 .0333229 .3017554
init_drug_level | 1.034508 .0152702 2.30 0.022 1.005008 1.064874
current_drug_level1 | .9954672 .0011483 -3.94 0.000 .9932191 .9977204
-------------------------------------------------------------------------------------
Here's the right way using the automated option (so there's no need to split the data):
. stcox protect init_drug_level, tvc(init_drug_level) texp(_t) nolog
failure _d: fracture
analysis time _t: time1
id: id
Cox regression -- Breslow method for ties
No. of subjects = 48 Number of obs = 106
No. of failures = 31
Time at risk = 714
LR chi2(3) = 33.23
Log likelihood = -81.95591 Prob > chi2 = 0.0000
---------------------------------------------------------------------------------
_t | Haz. Ratio Std. Err. z P>|z| [95% Conf. Interval]
----------------+----------------------------------------------------------------
main |
protect | .0868497 .0417166 -5.09 0.000 .0338774 .2226521
init_drug_level | .9770202 .0134021 -1.69 0.090 .9511026 1.003644
----------------+----------------------------------------------------------------
tvc |
init_drug_level | .9999956 .0009067 -0.00 0.996 .9982201 1.001774
---------------------------------------------------------------------------------
Note: variables in tvc equation interacted with _t
Here's how you would do it by hand, after splitting the records:
. stsplit, at(failures)
(21 failure times)
(452 observations (episodes) created)
. gen current_drug_level2 = init_drug_level *_t
. sort id _t
. list id _t0 _t _d *_drug_level* if inlist(id,1,5,9), sepby(id) noobs ab(30)
+----------------------------------------------------------------------------------+
| id _t0 _t _d init_drug_level current_drug_level1 current_drug_level2 |
|----------------------------------------------------------------------------------|
| 1 0 1 1 50 50 50 |
|----------------------------------------------------------------------------------|
| 5 0 1 0 100 400 100 |
| 5 1 2 0 100 400 200 |
| 5 2 3 0 100 400 300 |
| 5 3 4 1 100 400 400 |
|----------------------------------------------------------------------------------|
| 9 0 1 0 50 250 50 |
| 9 1 2 0 50 250 100 |
| 9 2 3 0 50 250 150 |
| 9 3 4 0 50 250 200 |
| 9 4 5 0 50 250 250 |
| 9 5 6 0 50 400 300 |
| 9 6 7 0 50 400 350 |
| 9 7 8 1 50 400 400 |
+----------------------------------------------------------------------------------+
Note how different the data looks. The key point is that we have added a record for every time that time increments while the patient is still alive, so the data is much bigger. Note that for patient 9, we previously assumed that the current dosage was 250, equal to what it was at the end of the fifth month. Now we have current dosage vary in the months 0 to 5, and we're no longer pretending that it was the same level as it was at the end of month 5. Since Cox regression is a series of comparisons of those subjects who fail to those subjects at risk of failing for periods where there was some failure, we are now comparing apples to apples when we include the extra data that reflects this variation.
When we include this data, we get the same estimates as with the automated version:
. stcox protect init_drug_level current_drug_level2, nolog
failure _d: fracture
analysis time _t: time1
id: id
Cox regression -- Breslow method for ties
No. of subjects = 48 Number of obs = 558
No. of failures = 31
Time at risk = 714
LR chi2(3) = 33.23
Log likelihood = -81.95591 Prob > chi2 = 0.0000
-------------------------------------------------------------------------------------
_t | Haz. Ratio Std. Err. z P>|z| [95% Conf. Interval]
--------------------+----------------------------------------------------------------
protect | .0868497 .0417166 -5.09 0.000 .0338774 .2226521
init_drug_level | .9770202 .0134021 -1.69 0.090 .9511026 1.003644
current_drug_level2 | .9999956 .0009067 -0.00 0.996 .9982201 1.001774
-------------------------------------------------------------------------------------
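For readers not using Stata, the record-splitting logic above can be sketched in plain Python (a hypothetical re-implementation of what -stsplit- plus the covariate regeneration do, not a substitute for it):

```python
# Split one subject's record at the given times, recomputing the
# time-varying covariate (init * t) in every resulting interval.

def split_episodes(rec, split_times):
    """rec: dict with keys t0, t, d, init; returns one row per interval."""
    cuts = [s for s in split_times if rec["t0"] < s < rec["t"]]
    bounds = [rec["t0"]] + cuts + [rec["t"]]
    rows = []
    for lo, hi in zip(bounds[:-1], bounds[1:]):
        rows.append({
            "t0": lo, "t": hi,
            "d": rec["d"] if hi == rec["t"] else 0,  # failure only in last interval
            "current": rec["init"] * hi,             # covariate at interval end
        })
    return rows

# Patient 5 from the listing: enters at 0, fails at month 4, dosage 100.
rows = split_episodes({"t0": 0, "t": 4, "d": 1, "init": 100},
                      split_times=[1, 2, 3, 4])
for r in rows:
    print(r)
```

This reproduces the four rows shown for patient 5, with current_drug_level2 stepping through 100, 200, 300, 400 and the failure indicator switched on only in the final episode.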
|
41,623
|
What’s wrong with this way of fitting time-dependent coefficients in a Cox regression?
|
As a side note, you can achieve your initial model in a "non-invalid" way in R by using the tt function. See the last example in the Using Time Dependent Covariates and Time Dependent Coefficients in the Cox Model vignette of the survival package (at least the last example in version 2.41-3).
|
41,624
|
ROC-AUC and Precision-Recall for random classifiers in class imbalanced problems
|
Area under the ROC curve is insensitive to class balance. Area under the PR curve, on the other hand, is highly influenced by this.
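A tiny self-contained demonstration (hand-picked scores; duplicating each negative changes the class balance without changing either score distribution):

```python
def auc(pos, neg):
    """Probability a random positive outranks a random negative (ties = 1/2)."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(pos, neg):
    """AP: mean precision taken at each positive, scores sorted descending."""
    scored = sorted([(s, 1) for s in pos] + [(s, 0) for s in neg], reverse=True)
    tp, precisions = 0, []
    for i, (_, label) in enumerate(scored, start=1):
        if label:
            tp += 1
            precisions.append(tp / i)
    return sum(precisions) / len(pos)

pos = [0.9, 0.8, 0.4]
neg = [0.7, 0.3, 0.2]
print(auc(pos, neg), average_precision(pos, neg))      # balanced case

neg10 = neg * 10                                       # 10x more negatives
print(auc(pos, neg10), average_precision(pos, neg10))  # AUC unchanged, AP drops
```

Replicating the negatives leaves every positive-vs-negative comparison (and hence the ROC-AUC) untouched, but floods the top of the ranking with negatives, so precision at each recall level falls.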
|
41,625
|
Obtaining an estimator for z given an estimator for log z
|
I don't necessarily have a problem with you exponentiating your predicted values. You just need to realize that if the former was an expectation, the result is no longer an expectation. Specifically, a regression model is intended to give the expected value of $Y$ at each point in $X$ ($E(Y|X=x_i)$). An expected value is the weighted average of all possible $Y$ values, where the weights are the likelihoods. In simpler terms, it is the conditional mean. Because the logarithm / exponentiation of a variable is a non-linear transformation, if you input a mean you don't get a mean as your output.
Often, people use the log transform to normalize the residual distribution and/or stabilize the variance. That is perfectly fine. But if the resulting distribution is normal(ish) with (sufficiently) constant variance, the original distribution necessarily wasn't. When you back transform, you get the conditional median instead of the conditional mean. If you understand that (and what it implies), and you want that, you will be fine.
Consider:
x = c(2, 3, 1, 9, 3, 5, 9, 3)
lx = log(x)
mlx = mean(lx)
mlx
# [1] 1.249109
exp(mlx)
# [1] 3.487234
mean(x)
# [1] 4.375
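The same point can be checked against a distribution where the truth is known. For a lognormal, $e^{\mu}$ is the median and $e^{\mu+\sigma^2/2}$ is the mean (a simulation sketch; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 1.0, 0.5
x = rng.lognormal(mu, sigma, size=200_000)

# Back-transforming the mean of the logs recovers the *median* of x;
# the mean of x is exp(mu + sigma^2 / 2), which is strictly larger.
back = np.exp(np.log(x).mean())
```

Here `back` lands near $e^{1.0} \approx 2.72$ (the median), while `x.mean()` lands near $e^{1.125} \approx 3.08$.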
|
41,626
|
Stochastic Programming with MCMC
|
PyMC2 can be combined with the LP solver of your choice to solve stochastic LP problems like this one. Here is code to do it for this very simple case. I've left a note on how I would change this for a more complex LP.
import numpy as np
import pymc as pm

c1 = pm.Normal('c1', mu=2, tau=.5**-2)
c2 = -3
b1 = pm.Normal('b1', mu=0, tau=3.**-2)

@pm.deterministic
def x(c1=c1, c2=c2, b1=b1):
    # use an LP solver here for a complex problem
    arg_min = np.empty(2)
    min_val = np.inf
    for x1, x2 in [[0, 0], [0, b1], [-b1, 0]]:  # there are only three possible extreme points,
        if -x1 + x2 <= b1 and x1 >= 0 and x2 >= 0:  # so check obj value at each valid one
            val = c1*x1 + c2*x2
            if val < min_val:
                min_val = val
                arg_min = [x1, x2]
    return np.array(arg_min, dtype=float)
Look at the weird joint distribution for $(x_1, x_2)$:
A notebook with all the code for this is here.
|
41,627
|
Feature Normalization/Standardization before or after Feature Selection?
|
Before.
In fact, it's the "feature selection process" you mention that is pretty much the reason why you want to have your features standardized in the first place.
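A tiny sketch of why (with hypothetical features): any selection criterion that compares raw dispersions, distances, or coefficient magnitudes is dominated by measurement scale until the features are standardized.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
height_cm = rng.normal(170, 10, n)   # feature on a large numeric scale
ratio = rng.normal(0.5, 0.05, n)     # feature on a tiny numeric scale
X = np.column_stack([height_cm, ratio])

# Raw dispersions reflect measurement units, not informativeness...
raw_var = X.var(axis=0)

# ...after z-scoring, every feature competes on an equal footing
Xz = (X - X.mean(axis=0)) / X.std(axis=0)
std_var = Xz.var(axis=0)
```

Before scaling, the first feature's variance is thousands of times larger purely because of its units; afterwards both have unit variance.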
|
41,628
|
GLM analogue of weighted least squares
|
Fit an MLE by maximizing
$$
l(\mathbf{\theta};\mathbf{y})=\sum_{i=1}^Nl{\left(\theta;y_i\right)}
$$
where $l$ is the log-likelihood. Fitting an MLE with inverse-probability (i.e. frequency) weights entails modifying the log-likelihood to:
$$
l(\mathbf{\theta};\mathbf{y})=\sum_{i=1}^Nw_i~l{\left(\theta;y_i\right)}.
$$
In the GLM case, this reduces to solving
$$
\sum_{i=1}^N w_i\frac{y_i-\mu_i}{V(y_i)}\left(\frac{\partial\mu_i}{\partial\eta_i}x_{ij}\right)=0,~\forall j
$$
Source: page 119 of http://www.ssicentral.com/lisrel/techdocs/sglim.pdf, linked at http://www.ssicentral.com/lisrel/resources.html#t. It's the "Generalized Linear Modeling" chapter (chapter 3) of the LISREL "technical documents."
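A sketch of solving those weighted score equations for the Poisson/log-link special case in plain NumPy via Newton-Raphson (hypothetical simulated data, not from the source): with integer frequency weights, the weighted fit matches exactly the fit on data with each row literally repeated $w_i$ times.

```python
import numpy as np

def weighted_poisson_glm(X, y, w, n_iter=25):
    """Solve the weighted GLM score equations
        sum_i w_i (y_i - mu_i) x_ij = 0,  mu_i = exp(x_i' beta),
    for a Poisson model with log link, by Newton-Raphson."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        mu = np.exp(X @ beta)
        score = X.T @ (w * (y - mu))                  # weighted score vector
        info = X.T @ (w[:, None] * mu[:, None] * X)   # weighted information
        beta = beta + np.linalg.solve(info, score)
    return beta

rng = np.random.default_rng(0)
x = rng.normal(size=100)
X = np.column_stack([np.ones(100), x])
y = rng.poisson(np.exp(0.5 + 0.3 * x)).astype(float)
w = rng.integers(1, 4, size=100).astype(float)        # frequency weights

beta_w = weighted_poisson_glm(X, y, w)

# Frequency weights behave like literally repeating each row w_i times:
reps = w.astype(int)
beta_rep = weighted_poisson_glm(np.repeat(X, reps, axis=0),
                                np.repeat(y, reps),
                                np.ones(reps.sum()))
```

The two fits agree to machine precision because the weighted score and information are term-by-term identical to those of the expanded data set.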
|
41,629
|
Matrix Factorization algorithms for Recommender Systems
|
Matrix factorisation is part of Numerical Linear Algebra (NLA). The following are some useful books in NLA and Data Mining / Statistical Learning.
The classic in NLA is Golub & Van Loan's Matrix Computations. Van Loan's webpage lists his books and links to others.
A modern approach that's great for self-study, is Numerical Linear Algebra by Trefethen & Bau, partially available online on Trefethen's website. Bau was working at Google last I checked.
For a data mining focus, Numerical Linear Algebra and Applications in Data Mining by Lars Elden is available online.
A classic on the statistical side is Elements of Statistical Learning by Hastie, Tibshirani, and Friedman. The authors have graciously made available their entire book online. This requires a fair bit of mathematical background, but the introductions to each topic will be accessible more generally.
A lighter version of the above is Introduction to Statistical Learning with Applications in R, by the same authors plus Daniella Witten. This is also available online by the authors and provides useful R code.
|
41,630
|
Matrix Factorization algorithms for Recommender Systems
|
You can have a look at http://dl.acm.org/citation.cfm?id=2043956. For more details you can look into the Recommender Systems Handbook.
I am working on similar problems. For more details or discussion mail me @ pranav.waila[at]gmail[dot]com.
|
41,631
|
What does "Conditioning on the margins of ____" mean?
|
Margins
Margins here refers to the values on the edges (margins!) of the table, that is, the total number of reds, total number of blacks, total number of drawn, and total number of not drawn. The related term marginal distribution refers to the distribution of a single variable obtained from a joint distribution of several variables by averaging over the other variables (etymologically, the term indeed comes from the values written on the margins of tables).
Conditioning
Conditioning refers to computing conditional distributions, that is, probability distributions given some information. Here, conditioning on the margins means that the margins are fixed, i.e., we assume that there are in total 6680 red balls (and 12160 black balls), as well as 382 drawn balls (and 18458 balls not drawn). So that, for example
Drawn Not drawn Total
Red 200 6480 6680
Black 182 11978 12160
Total 382 18458 18840
would be a possible realization of our random distribution (the margins are the same). Under the null hypothesis that getting drawn and the color of the ball are independent, conditioning on the margins leads to the hypergeometric distribution.
Alternatively, if the experiment were such that one draws balls until 160 reds are obtained, it would not make sense to condition on the margins (as the total number of drawn balls could have turned out something else than 382). In this case, one could obtain realizations like
Drawn Not drawn Total
Red 160 6520 6680
Black 182 11978 12160
Total 342 18498 18840
which would have different margins.
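The hypergeometric probability of a particular table, given the margins above, can be computed directly (a sketch using only the standard library; `hypergeom_pmf` is a hand-rolled helper, not a library call):

```python
from math import comb

def hypergeom_pmf(k, N, K, n):
    """P(exactly k reds among the n drawn) when the margins are fixed:
    N balls total, K of them red, n drawn, with all subsets of size n
    equally likely under the independence null."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# Probability of exactly 200 reds among the 382 drawn, given the margins
p = hypergeom_pmf(200, 18840, 6680, 382)
```

Since the observed 200 sits far above the expected count $382 \times 6680 / 18840 \approx 135$, this probability is tiny, which is exactly what a test conditioned on the margins (Fisher's exact test) exploits.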
|
What does "Conditioning on the margins of ____" mean?
|
Margins
Margins here refers to the values on the edges (margins!) of the table, that is, the total number of reds, total number of blacks, total number of drawn, and total number of not drawn. The rel
|
What does "Conditioning on the margins of ____" mean?
Margins
Margins here refers to the values on the edges (margins!) of the table, that is, the total number of reds, total number of blacks, total number of drawn, and total number of not drawn. The related term marginal distribution refers to the distribution of a single variable obtained from a joint distribution of several variables by averaging over the other variables (etymologically, the term indeed comes from the values written on the margins of tables).
Conditioning
Conditioning refers to computing conditional distributions, that is, probability distributions given some information. Here, conditioning on the margins means that the margins are fixed, i.e., we assume that there are in total 6680 red balls (and 12160 black balls), as well as 382 drawn balls (and 18458 balls not drawn). So that, for example
Drawn Not drawn Total
Red 200 6480 6680
Black 182 11978 12160
Total 382 18458 18840
would be a possible realization of our random distribution (the margins are the same). Under the null hypothesis that getting drawn and the color of the ball are independent, conditioning on the margins leads to the hypergeometric distribution.
Alternatively, if the experiment were such that one draws balls until 160 reds are obtained, it would not make sense to condition on the margins (as the total number of drawn balls could have turned out something else than 382). In this case, one could obtain realizations like
Drawn Not drawn Total
Red 160 6520 6680
Black 182 11978 12160
Total 342 18498 18840
which would have different margins.
|
What does "Conditioning on the margins of ____" mean?
Margins
Margins here refers to the values on the edges (margins!) of the table, that is, the total number of reds, total number of blacks, total number of drawn, and total number of not drawn. The rel
|
41,632
|
Clustering without a distance matrix
|
Inverted lists work very well for sparse data. This is what e.g. Lucene uses.
I don't know how extensible scikit-learn is. A lot of the code in it seems to be written in Cython, so it is Python-like code compiled via C. This would make it harder to extend.
ELKI, the data mining tool I am contributing a lot to, has an - yet unpublished and undocumented - Lucene addon. This would likely work for you. I hope to at some point also have an inverted index for sparse vectors in ELKI main (because of the Lucene dependency, I plan on keeping this addon separate).
We also have (non integrated) code for a prefix-tree index for accelerating Levenshtein distance. But this needs some more work to integrate it, and maybe some profiling.
Most of the time, indexes only work for a particular distance. There is no general purpose index that can support arbitrary distances at the same time. There are some indexes (e.g. M-tree, and iDistance, both available in ELKI) that do work with arbitrary distances, but only one distance at a time. But how well they work for your data and distance varies a lot. Usually, you need a good numerical contrast on your similarities.
The question you need to ask yourself is: is there a way to find all objects within a radius of $\varepsilon$ (or a similarity larger than $\varepsilon$) without comparing every object to every other object.
Note that for DBSCAN you can use fake distances. The actual distances are not used; only a binary decision is needed ($d\leq\varepsilon$). This is formalized as GeneralizedDBSCAN. So if you can implement a "distance function" that returns 0 for "similar" and 1 for "not similar", and plug this into scikit-learn's DBSCAN, you should be fine.
Depending on the architecture of scikit-learn, you maybe can plug in a custom index disguised as distance function. Inverted lists are a good candidate for binary data.
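A sketch of the fake-distance trick with scikit-learn's DBSCAN (the Jaccard threshold of 0.5 on binary vectors is an arbitrary choice for illustration; substitute your own similarity test):

```python
import numpy as np
from sklearn.cluster import DBSCAN

def binary_distance(a, b):
    """'Fake' distance in the GeneralizedDBSCAN spirit: 0 if the two
    points are deemed similar, 1 otherwise.  Similarity here is a
    Jaccard overlap >= 0.5 on binary vectors."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    sim = inter / union if union else 1.0
    return 0.0 if sim >= 0.5 else 1.0

X = np.array([[1, 1, 0, 0],
              [1, 1, 1, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)

# Any eps strictly between 0 and 1 works: neighborhoods consist of
# exactly the points at "distance" 0 (i.e., the similar ones)
labels = DBSCAN(eps=0.5, min_samples=2, metric=binary_distance).fit_predict(X)
```

On this toy data the first two rows form one cluster and the last two another, recovered purely from the binary similar/not-similar decision.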
|
41,633
|
GLMM and two slopes
|
The problem is the number of abscissas, or nodes, you've selected for the Adaptive Gaussian Quadrature (AGQ) approximation of the log-likelihood, specified by nAGQ. The default value is 1 (equivalent to the Laplacian approximation).
The glmer function's Details section (page 29 in the lme4 help page) states:
The most reliable approximation for GLMMs is adaptive Gauss-Hermite quadrature, at present implemented only for models with a single scalar random effect.
Limiting the AGQ approximation to single scalar random effects is not a limitation of AGQ, but appears to be a decision made by the lme4 package writers, as noted here by Douglas Bates back in 2011 (relevant piece quoted below):
It may seem that this issue could be put to rest by incorporating an
adaptive Gauss-Hermite method in glmer ...
there has been such a method in versions of glmer but only for very
specific models. We will add it but right now we are concentrating on
other issues in the development.
So, to get your code to execute, I believe setting nAGQ to 1 would work.
|
GLMM and two slopes
|
The problem is the number of abscissas, or nodes, you've selected for the Adaptive Gaussian Quadrature (AGQ) approximation of the log-likelihood, specified by nAGQ. The default value is 1 (equivalent
|
GLMM and two slopes
The problem is the number of abscissas, or nodes, you've selected for the Adaptive Gaussian Quadrature (AGQ) approximation of the log-likelihood, specified by nAGQ. The default value is 1 (equivalent to the Laplacian approximation).
The glmer function's Details section (page 29 in the lme4 help page) states:
The most reliable approximation for GLMMs is adaptive Gauss-Hermite quadrature, at present implemented only for models with a single scalar random effect.
Limiting the AGQ approximation to single scalar random effects is not a limitation of AGQ, but appears to be a decision made by the lme4 package writers, as noted here by Douglas Bates back in 2011 (relevant piece quoted below):
It may seem that this issue could be put to rest by incorporating an
adaptive Gauss-Hermite method in glmer ...
there has been such a method in versions of glmer but only for very
specific models. We will add it but right now we are concentrating on
other issues in the development.
So, to get your code to execute, I believe setting nAGQ to 1 would work.
|
GLMM and two slopes
The problem is the number of abscissas, or nodes, you've selected for the Adaptive Gaussian Quadrature (AGQ) approximation of the log-likelihood, specified by nAGQ. The default value is 1 (equivalent
|
41,634
|
Intuition behind the Calinski-Harabasz Index
|
Some simple intuition: $[B(k)/(k-1)]/[W(k)/(n-k)]$ is analogous to an F-ratio in ANOVA; $B(k)$ and $W(k)$ are between- and within-cluster sums of squares for the $k$ clusters.
$B(k)$ has $k-1$ degrees of freedom, while $W(k)$ has $n-k$ degrees of freedom.
As $k$ grows, if the clusters were all actually just from the same population, $B$ should be proportional to $k-1$ and $W$ should be proportional to $n-k$.
So if we scale for those degrees of freedom, it puts them more on the same scale (apart, of course, from the effectiveness of the clustering, which is what the index attempts to measure).
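A minimal NumPy implementation of the index makes the F-ratio analogy concrete (a sketch; `X_demo` and the two labelings are toy data):

```python
import numpy as np

def calinski_harabasz(X, labels):
    """CH index: [B(k)/(k-1)] / [W(k)/(n-k)], a pseudo-F ratio of
    between-cluster to within-cluster sum of squares."""
    n = len(X)
    classes = np.unique(labels)
    k = len(classes)
    grand_mean = X.mean(axis=0)
    B = W = 0.0
    for c in classes:
        Xc = X[labels == c]
        mean_c = Xc.mean(axis=0)
        B += len(Xc) * np.sum((mean_c - grand_mean) ** 2)  # between-cluster SS
        W += np.sum((Xc - mean_c) ** 2)                    # within-cluster SS
    return (B / (k - 1)) / (W / (n - k))

# Two tight, well-separated pairs: good labels give a large index
X_demo = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])
labels_good = np.array([0, 0, 1, 1])
labels_bad = np.array([0, 1, 0, 1])
ch_good = calinski_harabasz(X_demo, labels_good)
ch_bad = calinski_harabasz(X_demo, labels_bad)
```

The correct labeling scores 200 here, while the scrambled one scores far below 1, mirroring how a large F-ratio signals well-separated groups.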
|
41,635
|
If $X$ is lognormally distributed, what is the distribution of $1 / (1 + X)$?
|
To close this one:
We want the pdf of the variable
$$Z = g(X) =\frac 1{X+1} \Rightarrow X = g^{-1}(Z)=\frac 1Z -1 \Rightarrow \frac {\partial g^{-1}(z)}{\partial z}=-\frac 1{z^2}$$
Note that by construction $0 < Z < 1$ (since $X > 0$ for a lognormal).
The density of the log-normal is known. Applying the change-of-variable formula
we obtain
$$f_Z(z) = \left|\frac {\partial g^{-1}(z)}{\partial z}\right|\cdot f_X(g^{-1}(z))$$
$$=\frac 1{z^2}\cdot\frac{1}{[(1-z)/z]\sqrt{2\pi}\sigma} \exp{ \left\{-\frac{\left(\ln[(1-z)/z]-\mu\right)^2}{2\sigma^2}\right\}}$$
$$\Rightarrow f_Z(z) = \frac{1}{(1-z)z\sqrt{2\pi}\sigma} \exp{ \left\{-\frac{\Big(\ln[z/(1-z)]-(-\mu)\Big)^2}{2\sigma^2}\right\}}$$
which is the density of the "logit-normal" distribution with parameters $\sigma$ and $-\mu$.
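A quick simulation check of the result: since $\ln[Z/(1-Z)] = \ln(1/X) = -\ln X$ exactly, the logit of $Z$ must be Normal$(-\mu, \sigma^2)$ (a sketch with arbitrary parameter values):

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.7, 0.4
X = rng.lognormal(mu, sigma, size=100_000)
Z = 1.0 / (1.0 + X)

# The logit of Z equals -log(X) identically, so it is N(-mu, sigma^2)
logit_Z = np.log(Z / (1 - Z))
```

The identity holds pointwise, and the sample mean and standard deviation of `logit_Z` land on $-\mu$ and $\sigma$, confirming the logit-normal$(-\mu, \sigma)$ density.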
|
If $X$ is lognormally distributed, what is the distribution of $1 / (1 + X)$?
|
To close this one:
We want the pdf of the variable
$$Z = g(X) =\frac 1{X+1} \Rightarrow X = g^{-1}(Z)=\frac 1Z -1 \Rightarrow \frac {\partial g^{-1}(z)}{\partial z}=-\frac 1{z^2}$$
Note that by constr
|
If $X$ is lognormally distributed, what is the distribution of $1 / (1 + X)$?
To close this one:
We want the pdf of the variable
$$Z = g(X) =\frac 1{X+1} \Rightarrow X = g^{-1}(Z)=\frac 1Z -1 \Rightarrow \frac {\partial g^{-1}(z)}{\partial z}=-\frac 1{z^2}$$
Note that by construction $0 \leq Z \leq 1$.
The density of the log-normal is known. Applying the change-of-variable formula
we obtain
$$f_Z(z) = \left|\frac {\partial g^{-1}(z)}{\partial z}\right|\cdot f_X(g^{-1}(z))$$
$$=\frac 1{z^2}\cdot\frac{1}{[(1-z)/z]\sqrt{2\pi}\sigma} \exp{ \left\{-\frac{\left(\ln[(1-z)/z]-\mu\right)^2}{2\sigma^2}\right\}}$$
$$\Rightarrow f_Z(z) = \frac{1}{(1-z)z\sqrt{2\pi}\sigma} \exp{ \left\{-\frac{\Big(\ln[z/(1-z)]-(-\mu)\Big)^2}{2\sigma^2}\right\}}$$
which is the density of the "logit-normal" distribution with parameters $\sigma$ and $-\mu$.
|
If $X$ is lognormally distributed, what is the distribution of $1 / (1 + X)$?
To close this one:
We want the pdf of the variable
$$Z = g(X) =\frac 1{X+1} \Rightarrow X = g^{-1}(Z)=\frac 1Z -1 \Rightarrow \frac {\partial g^{-1}(z)}{\partial z}=-\frac 1{z^2}$$
Note that by constr
|
41,636
|
Given two sets, how can I say statistically if they are similar/different
|
null hypothesis in this case is that [...] they are different
-- That's not how null hypotheses work. You need something you can calculate the distribution of a test statistic under; generally that's no effect/no difference (whence, "null").
similarity ... whether or not the two groups were sampled from the same population
Your definition of 'similarity' ("from the same population") is a suitable null, fortunately.
So if the null is the population distributions are identical and the alternative is that they differ in some way, you're after a general test for distributional differences -- something that would pick up a difference in location, or spread, or shape.
This would be something like a two-sample Kolmogorov-Smirnov test. There are other possibilities, but that's the most commonly used one. If there are particular kinds of alternatives you especially want power against, there may be a more suitable choice.
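A sketch with SciPy's two-sample KS test (the shift of one standard deviation between the samples is an arbitrary example):

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
a = rng.normal(0, 1, 500)   # sample from one population
b = rng.normal(1, 1, 500)   # sample from a shifted population

# Null hypothesis: both samples were drawn from the same distribution
res = ks_2samp(a, b)
```

`res.statistic` is the maximum gap between the two empirical CDFs, so the test reacts to differences in location, spread, or shape; here the location shift yields a vanishingly small p-value.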
|
Given two sets, how can I say statistically if they are similar/different
|
null hypothesis in this case is that [...] they are different
-- That's not how null hypotheses work. You need something you can calculate the distribution of a test statistic under; generally that's
|
Given two sets, how can I say statistically if they are similar/different
null hypothesis in this case is that [...] they are different
-- That's not how null hypotheses work. You need something you can calculate the distribution of a test statistic under; generally that's no effect/no difference (whence, "null").
similarity ... whether or not the two groups were sampled from the same population
Your definition of 'similarity' ("from the same population") is a suitable null, fortunately.
So if the null is the population distributions are identical and the alternative is that they differ in some way, you're after a general test for distributional differences -- something that would pick up a difference in location, or spread, or shape.
This would be something like a two-sample Kolmogorov-Smirnov test. There are other possibilities, but that's the most commonly used one. If there are particular kinds of alternatives you especially want power against, there may be a more suitable choice.
|
Given two sets, how can I say statistically if they are similar/different
null hypothesis in this case is that [...] they are different
-- That's not how null hypotheses work. You need something you can calculate the distribution of a test statistic under; generally that's
|
41,637
|
Given two sets, how can I say statistically if they are similar/different
|
I would like to compare these two sets of data samples and determine
whether the null hypothesis holds...Therefore how might one determine
how similar/different two groups are...
These are two different things. It is not clear to me that you should be using hypothesis testing at all if you already know the distributions are different. See: Testing for significance between means, having one normal distributed sample and one non normal distributed
You will need to come up with a definition for "similar/different" suitable to your purpose since it sounds like you already know the groups are different.
|
41,638
|
Given two sets, how can I say statistically if they are similar/different
|
On average how many observations per item?
I agree with your naive approach and wanting something more robust. There are many robust nonparametric tests to compare two samples, such as permutation tests or Wilcoxon Rank Sum Tests. You can compare these to the results of the two-sample t-tests and look more closely at the discrepancies.
Obviously you'll want to automate this to do all variables at once using one function/command, which it sounds like you've already accomplished.
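For reference, a permutation test needs no distributional assumptions and only a few lines (a sketch with hypothetical simulated groups; here the statistic is the difference in means, but any statistic works):

```python
import numpy as np

def permutation_test(a, b, n_perm=2000, seed=0):
    """Two-sided permutation test for a difference in means: shuffle the
    pooled sample, re-split into the original group sizes, and count how
    often the shuffled mean difference is at least as extreme as observed."""
    rng = np.random.default_rng(seed)
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(pooled)
        diff = abs(perm[:len(a)].mean() - perm[len(a):].mean())
        count += diff >= observed
    return (count + 1) / (n_perm + 1)   # add-one avoids a zero p-value

rng = np.random.default_rng(1)
a = rng.normal(0, 1, 60)
b = rng.normal(2, 1, 60)   # clearly shifted group
p_shift = permutation_test(a, b)
```

With the two groups shifted by two standard deviations, essentially no permutation reproduces the observed difference and the p-value is near its minimum of $1/(n_{\text{perm}}+1)$.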
|
41,639
|
What is the meaning of these terms in image processing?
|
In image processing, computer vision and related fields, an image moment is a certain particular weighted average (moment) of the image pixels' intensities, or a function of such moments, usually chosen to have some attractive property or interpretation.
Local and global characteristics. For example, suppose we want to smooth a noisy image. We could smooth the entire image based upon the noise or some other characteristic of the whole image, i.e., globally, or we could smooth the image based upon the noise and/or rate of change of events in sub-regions of that image, i.e., multiple local processing of the image. An example of local, or region-by-region, image processing is the Pixon method.
Energy of image. This only appears in the context of entropy for image processing; it is the information content approach to understanding images. Answered elsewhere.
Color temperature has more to do with the black body temperature illumination of an image than intrinsic color, see link.
|
41,640
|
Normal Approximation of the sum of correlated Bernoulli Random Variables
|
If the number of variables is sufficiently large and the correlation is bounded away from 1, then there are Central Limit Theorems that apply (e.g., also see versions of the CLT for stationary processes).
So if your $i$-th variable has parameter $p_i$, then the variance of $X_i$ is $p_i(1-p_i)$ and:
$$\text{Cov}(X_i,X_j)=\rho \sqrt{p_i(1-p_i)\cdot p_j(1-p_j)}$$
The expected value of the sum is the sum of the expected values. The variance of the sum is the sum of the variances plus twice the sum of all the pairwise covariances, and if $n$ is large enough, the standardized sum will be approximately standard normal.
You may want to consider the possibility of using a continuity correction.
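A minimal numerical sketch of these formulas, assuming a common pairwise correlation $\rho$ and made-up $p_i$, with the continuity correction applied:

```python
# Normal approximation to the sum of equicorrelated Bernoulli variables,
# using the mean/variance formulas above. All parameter values are assumed
# purely for illustration.
import math

p = [0.2, 0.5, 0.3, 0.7, 0.4, 0.6]   # assumed success probabilities
rho = 0.1                            # assumed common pairwise correlation

mu = sum(p)                                      # mean of the sum
var = sum(pi * (1 - pi) for pi in p)             # sum of variances
n = len(p)
for i in range(n):                               # plus twice the covariances
    for j in range(i + 1, n):
        var += 2 * rho * math.sqrt(p[i] * (1 - p[i]) * p[j] * (1 - p[j]))

def phi(z):  # standard normal cdf
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

s = 3
prob = phi((s + 0.5 - mu) / math.sqrt(var))      # P(S <= 3), corrected
print(mu, var, prob)
```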
|
41,641
|
Kernel smoothing for Edgeworth expansion
|
I don't know what estimator you are considering but what you propose has certainly been done before.
Horowitz (1998) investigates whether the bootstrap can be used for asymptotic refinements of median regression. He faces the same problem as you given that the objective function has an embedded indicator
$$\widehat{\beta} = \arg\min_{\beta} \frac{1}{n} \sum^{n}_{i=1} [q-1(u_i <0)]u_i$$
where the indicator is one for negative residuals. The problem is that when $u_i = 0$ we are at the kink of the check function $\rho (u_i) \equiv [q-1(u_i <0)]u_i$. Horowitz (1998) "smooths" this objective function by changing $\rho (u_i)$ to
$$\rho^{S}(u_i) \equiv \left[2K\left(\frac{u_i}{h}\right) -1 \right]u_i$$
where $K(\cdot)$ is symmetric and bounded with $K \in [-1,1]$ with a differentiable function that satisfies $K(\nu)=0$ if $\nu \leq -1$ and $K(\nu) = 1$ if $\nu \geq 1$. In this context $K$ is similar to the integral of a kernel function but not a Kernel itself. Horowitz (1998) then applies the bootstrap for asymptotic refinements - which is the only difference to what you are planning to do but the reasoning is similar. For this purpose he needed the smooth objective function.
Other papers have replaced the indicator with kernels, like Kaplan and Sun (2012) or Whang (2006). If I remember correctly, the Kaplan and Sun (2012) paper uses a kernel smoother and then applies an Edgeworth expansion for asymptotic refinements, but I don't have the details ready just now. I post the references below if you have further interest in this issue.
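As a rough sketch of the smoothing idea (not Horowitz's exact construction: the integrated-Epanechnikov choice of $K$ and the bandwidth are my own illustrative assumptions), one such $K$ and the resulting smoothed objective $\rho^S(u) = [2K(u/h)-1]u$ from the text can be coded as:

```python
# Illustrative only: a smooth surrogate for the indicator built by
# integrating an Epanechnikov kernel. K is 0 below -1, 1 above 1, and
# smooth in between, as required in the text.
def K(v):
    """Integrated Epanechnikov kernel."""
    if v <= -1.0:
        return 0.0
    if v >= 1.0:
        return 1.0
    return 0.5 + 0.75 * (v - v ** 3 / 3.0)

def rho_s(u, h=0.5):
    """Smoothed objective [2K(u/h) - 1]u; h is an assumed bandwidth."""
    return (2.0 * K(u / h) - 1.0) * u

# Far from the kink this matches an absolute-value shape; near u = 0 it is
# differentiable instead of kinked.
print(rho_s(2.0), rho_s(-2.0), rho_s(0.0))
```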
References
Horowitz, J.L. (1998) "Bootstrap Methods for Median Regression Models", Econometrica, Vol. 66(6), pp. 1327-1351 [link]
Kaplan, D.M. and Sun, Y. (2012) "Smoothed Estimating Equations for Instrumental Variables Quantile Regression", UC San Diego Working Paper [link]
Whang, Y.-J. (2006) "Smoothed empirical likelihood methods for quantile regression models", Econometric Theory, Vol. 22(2), pp. 173-205 [link]
|
41,642
|
How good is Monte Carlo Simulation when the variable distribution is unknown?
|
In practice simulation is almost always done without full certainty about the distribution.
For example, one might consider a lognormal distribution for some random variable... but what if it was a little lighter tailed (gamma, say)? Or heavier tailed (inverse gamma, say)? Or much heavier tailed (inverse Gaussian, say)? Or log-t?
What if it was some mixture of lognormals? ... or perhaps some form of regime switching?
What if there was a slight dependence in the data?
One can consider a host of ways in which the model may be inadequate (preferably springing from an understanding of the process being dealt with), and see via simulation how that affects the conclusions.
Such things are commonplace. It's one reason why simulation is so useful.
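A toy version of such a check, with all distributional and parameter choices assumed for illustration: match a lognormal and a gamma on the same first two moments and compare a tail probability by simulation.

```python
# Sensitivity sketch: two candidate models with identical mean and variance
# can still give noticeably different answers for a tail quantity.
import numpy as np

rng = np.random.default_rng(1)
mean, var = 1.0, 0.5

# Lognormal parameters matched to (mean, var)
sigma2 = np.log(1 + var / mean**2)
mu = np.log(mean) - sigma2 / 2
x_ln = rng.lognormal(mu, np.sqrt(sigma2), size=200_000)

# Gamma parameters matched to (mean, var)
shape = mean**2 / var
scale = var / mean
x_g = rng.gamma(shape, scale, size=200_000)

p_ln = (x_ln > 3.0).mean()  # tail probability under each model
p_g = (x_g > 3.0).mean()
print(p_ln, p_g)            # same first two moments, different tails
```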
Think of the stock price return distribution-- it doesn't fit into a Gaussian Distribution, but what distribution we can use?
If you read around a little you can find a variety of reasonably good models; none will be perfect.
A relevant, but probably narrower question is, is there any research done along the line of sensitivity analysis? By "sensitivity analysis" I mean we assume known distributions for the variables, but we perturb them a little by slightly changing the distribution shape or slightly changing some of the variables, will the outcome of the simulation results remain roughly the same, or completely different?
Yes, in a variety of guises (and not just for simulation; it's a standard tool in robustness for example) -- but the perturbations needn't be small; you can investigate the effect of just about any form of variation from the base assumptions.
In robustness the effect of small perturbations in distribution is considered through tools like influence functions, empirical influence functions and sensitivity curves (and also via simulation)
The first several books at the above wikipedia page are a good starting place for references on robustness; I don't recall exactly what's in which book (I read Hampel et al in the mid 80's and Huber a year or two later, and some of the others since - it's been a while), but the Hampel et al book does discuss influence functions, so I'd start there I guess.
Edit: TooTone reminded me that I wanted to mention resampling methods, which can be thought of as a kind of simulation. For example, with simple bootstrapping, you use the ECDF (the cdf of the sample) rather than the assumed distribution to sample from. In more complex resampling schemes some function of the data (such as model residuals) may be resampled. So these approaches needn't rely on some parametric assumption about the distribution the data were drawn from.
(Then again, there's also the parametric bootstrap which might be as readily thought of as a particular simulation technique as bootstrapping.)
|
How good is Monte Carlo Simulation when the variable distribution is unknown?
|
In practice simulation is almost always done without full certainty about the distribution.
For example, one might consider a lognormal distribution for some random variable... but what if it was a li
|
How good is Monte Carlo Simulation when the variable distribution is unknown?
In practice simulation is almost always done without full certainty about the distribution.
For example, one might consider a lognormal distribution for some random variable... but what if it was a little lighter tailed (gamma, say)? Or heavier tailed (inverse gamma, say)? Or much heavier tailed (inverse Gaussian, say)? Or log-t?
What if it was some mixture of lognormals? ... or perhaps some form of regime switching?
What if there was a slight dependence in the data?
One can consider a host of ways in which the model may be inadequate (preferably springing from an understanding of the process being dealt with), and see via simulation how that affects the conclusions.
Such things are commonplace. It's one reason why simulation is so useful
Think of the stock price return distribution-- it doesn't fit into a Gaussian Distribution, but what distribution we can use?
If you read around a little you can find a variety of reasonably good models; none will be perfect.
A relevant, but probably narrower question is, is there any research done along the line of sensitivity analysis? By "sensitivity analysis" I mean we assume known distributions for the variables, but we perturb them a little by slightly changing the distribution shape or slightly changing some of the variables, will the outcome of the simulation results remain roughly the same, or completely different?
Yes, in a variety of guises (and not just for simulation; it's a standard tool in robustness for example) -- but the perturbations needn't be small; you can investigate the effect of just about any form of variation from the base assumptions.
In robustness the effect of small perturbations in distribution is considered through tools like influence functions, empirical influence functions and sensitivity curves (and also via simulation)
The first several books at the above wikipedia page are a good starting place for references on robustness; I don't recall exactly what's in which book (I read Hampel et al in the mid 80's and Huber a year or two later, and some of the others since - it's been a while), but the Hampel et al book does discuss influence functions, so I'd start there I guess.
Edit: TooTone reminded me that I wanted to mention resampling methods, which can be thought of as a kind of simulation. For example, with simple bootstrapping, you use the ECDF (the cdf of the sample) rather than the assumed distribution to sample from. In more complex resampling schemes some function of the data (such as model residuals) may be resampled. So these approaches needn't rely on some parametric assumption about the distribution the data were drawn from.
(Then again, there's also the parametric bootstrap which might be as readily thought of as a particular simulation technique as bootstrapping.)
|
How good is Monte Carlo Simulation when the variable distribution is unknown?
In practice simulation is almost always done without full certainty about the distribution.
For example, one might consider a lognormal distribution for some random variable... but what if it was a li
|
41,643
|
Does the slope of a regression between observed and predicted values always equal the $R^2$ of the original model?
|
I always get exactly the same value for the overall model fit $R^2$ and slope of the observed versus predicted regression.
This will be true provided a constant term is included in the overall model. Why?
$R^2$ measures the variance of the fit $\hat Y$ relative to the variance of $Y$ (provided the model includes a constant).
Regressing $\hat Y$ against $Y$ or $Y$ against $\hat Y$ must produce identical standardized slopes $\hat\beta_{\hat{Y}Y} = \hat\beta_{Y\hat{Y}}$. This is because the standardized slope in a univariate regression of $Y$ against any $X$ is their correlation coefficient $\rho_{XY}$, which is symmetric in $X$ and $Y$.
The standardized slope $\hat \beta_{XY}$ in any univariate regression of $Y$ against any $X$ is related to the slope $\hat b_{XY}$ via
$$\hat \beta_{XY} = \hat b_{XY} \frac{\text{SD}(X)}{\text{SD}(Y)}.$$
Regressing $Y$ against $\hat Y$ must have a unit slope $\hat b_{\hat{Y}Y}$. Geometrically, $\hat Y$ is the projection of $Y$ onto the column space of the design matrix and the regression of $Y$ against $\hat Y$ is $1$ times the component of $Y$ on that projection.
Putting these all together (in order) yields
$$R^2 = \rho^2_{\hat{Y}Y} = \hat\beta_{\hat{Y}Y}\hat\beta_{Y\hat{Y}} = \left(\hat b_{Y\hat{Y}} \frac{\text{SD}(Y)}{\text{SD}(\hat{Y})}\right)\left(\hat b_{\hat{Y}Y} \frac{\text{SD}(\hat{Y})}{\text{SD}(Y)}\right) = \hat b_{Y\hat{Y}}\hat b_{\hat{Y}Y} = \hat b_{Y\hat{Y}},$$
QED.
The result is not necessarily true when the model does not include a constant: just about any random simulation, as shown below, will give a counterexample.
n <- 10; d <- 2
x <- matrix(rnorm(n*d), ncol=d)
y <- x %*% (1:d) + rnorm(n, 3)
fit <- lm(y ~ x)
y.hat <- predict(fit)
#
# Look for the appearances of R^2 in the output.
#
var(y.hat) / var(y) # R^2
with(summary(lm(y.hat ~ y)), c(coefficients["y", 1], r.squared))
with(summary(lm(y ~ y.hat)), c(coefficients["y.hat", 1], r.squared))
#
# Repeat without a constant term: the same consistency among
# the output occurs, *but the slopes are not equal to R^2*.
#
with(summary(lm(y.hat ~ y - 1)), c(coefficients["y", 1], r.squared))
with(summary(lm(y ~ y.hat - 1)), c(coefficients["y.hat", 1], r.squared))
|
41,644
|
Does the slope of a regression between observed and predicted values always equal the $R^2$ of the original model?
|
$R^2$ doesn't have to be equal to $\beta$. You either have a rare coincidence, or are reading the same field from the fit object somehow.
|
41,645
|
Modelling slopes over time
|
Instead of running the models for your two periods separately
$$y_1 = \alpha_1 + \beta_1 X_1 + u_1$$
$$y_2 = \alpha_2 + \beta_2 X_2 + u_2$$
you can combine them as
$$y_t = (X_1\cdot d_1)\beta_1 + (X_2\cdot d_2)\beta_2 + d_1\cdot u_1 + d_2\cdot u_2$$
where $d_1$ is a dummy which equals one for the period pre day 150 and $d_2$ is the post day 150 dummy. This can be re-written as
$$y_t = X_t\beta_1 + d_2\cdot X_2(\beta_2 - \beta_1) + d_1\cdot u_1 + d_2\cdot u_2$$
How to do this in practice:
generate a dummy which is 1 after day 150 and zero otherwise, interact it with your explanatory variable and then regress your dependent variable on your explanatory variable, the dummy and the interaction. When you regress
$$y_t = \alpha + \beta_1 X_t + \beta_2 d_t + \beta_3 (X_t \cdot d_t) + e_t$$
this allows you to model the structural break and in addition you can perform an F-test on $\beta_2$ and $\beta_3$ to see whether your slope for $X_t$ is actually different between your two periods. This is usually referred to as the Chow test.
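A numerical illustration of this procedure (simulated data; plain numpy/scipy rather than any particular econometrics package): fit the restricted model without the dummy terms and the unrestricted model with them, and compute the F-statistic from the two sums of squared residuals.

```python
# Chow-type test via the dummy-interaction regression. The break at day 150
# and all coefficient values are simulated for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
t = np.arange(1, 301)
d = (t > 150).astype(float)            # post-day-150 dummy
x = rng.normal(size=300)
y = 1.0 + 0.5 * x + d * (0.5 + 0.7 * x) + rng.normal(scale=0.5, size=300)

def ssr(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return ((y - X @ beta) ** 2).sum()

X_r = np.column_stack([np.ones(300), x])            # restricted: common slope
X_u = np.column_stack([np.ones(300), x, d, d * x])  # unrestricted
ssr_r, ssr_u = ssr(X_r, y), ssr(X_u, y)

q, df = 2, 300 - 4                     # 2 restrictions, n - k residual df
F = ((ssr_r - ssr_u) / q) / (ssr_u / df)
p = stats.f.sf(F, q, df)
print(F, p)  # a small p-value indicates a structural break
```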
|
41,646
|
The between estimator in panel data
|
As you said correctly, the between estimator takes the individual effects model
$$y_{it} = \alpha_i + x'_{it}\beta + \epsilon_{it}$$
and averages out the time component resulting in the regression
$$\overline{y}_{i.} = \alpha + \overline{x}'_{i.}\beta + (\alpha_i - \alpha + \overline{\epsilon}_{i.})$$
where bars indicate averaged variables and the dot subscript signifies that time has been averaged out. You still need an intercept in this model to estimate it consistently.
Note though that this estimator only uses the cross-sectional information and completely discards the time variation in your data. The estimator is only consistent if $\alpha_i$ are random effects (though in this case you may opt for the random effects estimator which is more efficient and also uses the time variation in the data).
You can easily implement the between estimator in your statistical software by averaging the data for each panel unit to average out the time component and then regress the averaged variables on each other. For more information on this topic see for instance Cameron and Trivedi (2009) "Microeconometrics using Stata" or Wooldridge (2010) "Econometric Analysis of Cross-Section and Panel Data".
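A small sketch of that recipe on a made-up panel (pandas/numpy assumed): average each unit over time with a groupby, then run OLS with an intercept on the unit means.

```python
# Between estimator by hand: collapse the panel to unit means, then OLS.
# The panel below is simulated, with random individual effects so that the
# estimator is consistent for the true slope of 2.
import numpy as np
import pandas as pd

rng = np.random.default_rng(4)
n_units, n_periods = 50, 8
unit = np.repeat(np.arange(n_units), n_periods)
alpha_i = rng.normal(size=n_units)[unit]          # random individual effects
x = rng.normal(size=n_units * n_periods)
y = alpha_i + 2.0 * x + rng.normal(size=x.size)

df = pd.DataFrame({"unit": unit, "x": x, "y": y})
means = df.groupby("unit").mean()                  # average out time

X = np.column_stack([np.ones(n_units), means["x"]])
beta = np.linalg.lstsq(X, means["y"], rcond=None)[0]
print(beta)  # [intercept, slope]; slope should be near 2
```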
|
41,647
|
Clustering in Cox proportional hazards model MLM vs. sandwich estimator
|
Neglecting the clustering is, I think, usual practice when analysing multicentre clinical trials based on a time-to-event outcome. That is, the standard Cox model is used. In that case, the treatment effect has a population-averaged (marginal) interpretation, the effect being averaged over all centres. It is known that the Cox model leads to a consistent estimate of the population hazard ratio (under some mild conditions, I guess), but the standard error is not consistent because of the correlation between the survival times. Adjustment of the standard error, though, is possible by using the jackknife, leading to some kind of sandwich estimator. The method is available in R (cf. the cluster() function to be used within coxph()).

Alternatively, multilevel modelling can also be used for this type of data, as you suggest. That is, the centre effect enters the Cox model as a random effect. In survival analysis, this is called a frailty model. In the frailty model, the treatment effect has a centre-specific (conditional) interpretation. The method is also available in R (cf. the frailty() function to be used within coxph(), or coxme()).
Both approaches are correct, provided that the hazard ratio is well interpreted (population-averaged versus centre-specific). In general, the population-averaged effect is attenuated compared to the centre-specific effect. I would say that the choice of one method over another depends on the interpretation we want to give to the hazard ratio. The centre-specific interpretation is, according to me, particularly relevant in the context of clinical trials as it compares “like-for-like”.
References that discuss this issue include Glidden and Vittinghoff (2004), the PhD thesis by Snavely (Chapter 1), and Duchateau and Janssen (2008, Chapter 3).
|
41,648
|
The Conjugate Beta Prior proof
|
To go from the third to fourth row just ignore factors that are constant with respect to $\pi$. That is, let
$$\binom{n}{y}\frac{\Gamma(\alpha+\beta)}{\Gamma(\alpha)\Gamma(\beta)}=k$$
so
$$p(\pi|y)=k \cdot \pi^y (1-\pi)^{(n-y)}\pi^{(\alpha-1)}(1-\pi)^{(\beta-1)}$$
or
$$p(\pi|y) \propto \pi^y(1-\pi)^{(n-y)}\pi^{(\alpha-1)}(1-\pi)^{(\beta-1)}$$
The idea's to deal just with the kernel of the probability distribution, knowing you can always put the normalizing constant back in later.
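A quick numerical check of this argument, with illustrative values of $\alpha$, $\beta$, $n$, $y$: normalizing the kernel recovers exactly the $\text{Beta}(\alpha+y,\ \beta+n-y)$ density.

```python
# Verify the kernel argument numerically: integrate the unnormalized
# posterior kernel over [0, 1] to recover the normalizing constant, then
# compare with the known Beta(alpha + y, beta + n - y) density.
import numpy as np
from scipy import stats
from scipy.integrate import quad

alpha, beta, n, y = 2.0, 3.0, 10, 4   # assumed prior and data

def kernel(pi):
    return pi ** (y + alpha - 1) * (1 - pi) ** (n - y + beta - 1)

norm, _ = quad(kernel, 0, 1)           # the constant we "put back in later"
pi0 = 0.37
posterior_at_pi0 = kernel(pi0) / norm
exact = stats.beta(alpha + y, beta + n - y).pdf(pi0)
print(posterior_at_pi0, exact)         # these agree
```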
|
41,649
|
Compare maxima of two Gaussian samples
|
I found the answer, thanks mainly to whuber's comment. Let $M_y$ and $M_x$ denote the sample maxima of $Y$ and $X$, respectively. Scaled versions of $M_y$ and $M_x$ are both Gumbel distributed [source 1], and the difference between two Gumbels is logistic [source 2]. Specifically, we have
$$\sqrt{2 \ln n}~(M_y-M_x-c) \stackrel{d}{\rightarrow} \mathcal{L}(0,1),$$
where $\mathcal{L}(0,1)$ denotes the logistic distribution with location $0$ and scale $1$. From the usual approximation, we hence have
$$(M_y-M_x) \approx \mathcal{L}\left(c,\frac{1}{\sqrt{2 \ln n}}\right).$$ Using the cdf of the logistic distribution [source 2], we get
$$P(M_y > M_x) \approx \frac{\exp\left(\sqrt{2\ln n}\times c\right)}{1+\exp\left(\sqrt{2\ln n}\times c\right)}.$$
This approximation formula implies that $P(M_y > M_x)$ does converge to $1$, but at a very (!) slow rate. For example, if $n$ equals one million and $c = 0.1$, the probability is only $62.84$ percent.
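As a numerical check of the approximation, here is a short stdlib-only Python sketch evaluating the formula at the values quoted above ($n = 10^6$, $c = 0.1$):

```python
import math

def p_max_y_beats_x(n, c):
    # logistic-CDF approximation to P(M_y > M_x) derived above
    t = math.sqrt(2 * math.log(n)) * c
    return math.exp(t) / (1 + math.exp(t))

print(round(p_max_y_beats_x(10 ** 6, 0.1), 4))  # 0.6285
```

Even at $n = 10^9$ the probability is still only about 66 percent, illustrating how slow the convergence is.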
References:
Source 1 -
http://sfb649.wiwi.hu-berlin.de/fedc_homepage/xplore/tutorials/sfehtmlnode90.html
Source 2 -
http://en.wikipedia.org/wiki/Logistic_distribution
|
41,650
|
Large difference between Mann-Whitney test and Wilcoxon signed rank test significance
|
In scipy.stats, the Mann-Whitney U test compares two populations:
Computes the Mann-Whitney rank test on samples x and y.
but the Wilcoxon test compares two PAIRED populations:
The Wilcoxon signed-rank test tests the null hypothesis that two
related paired samples come from the same distribution. In particular,
it tests whether the distribution of the differences x - y is
symmetric about zero. It is a non-parametric version of the paired
T-test.
EDITED / CORRECTED in response to ttnphns' comments.
Note that the paired t-test concerns the mean of the differences rather than whether their distribution is symmetric about zero, so the Wilcoxon signed rank test is not truly a non-parametric counterpart of the paired t-test.
The Mann-Whitney test, on the other hand, assumes that all the observations are independent of each other (no basis for pairing here!). It also assumes that the two distributions are the same, and the alternative is that one is stochastically greater than the other. If we make the additional assumption that the only difference between the two distributions is their location, and the distributions are continuous, then "stochastically greater than" is equivalent to such statements as "the medians are different", so you can, with the extra assumption(s), interpret it that way.
The Mann-Whitney uses a continuity correction by default, but the Wilcoxon doesn't.
The Mann-Whitney handles ties using the midrank, but the Wilcoxon offers three options for handling ties in the paired values (i.e., zero difference between the two elements of the pair.)
It sounds like the Wilcoxon test is the more appropriate for your purposes, since you do have that lack of independence between all observations. However, one might imagine that requests with similar, but not equal, lengths might exhibit similar behavior, whereas the Wilcoxon would assume that if they aren't paired, they are independent. A logistic regression model might serve you better in this case.
Quotes are from the scipy.stats doc pages, which we aren't supposed to link to, apparently.
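To make the structural difference concrete, here is a minimal pure-Python sketch (toy data; midranks for ties; statistics only, no p-values). Mann-Whitney pools and ranks all observations as if they were independent, while Wilcoxon ranks the absolute paired differences:

```python
def ranks(values):
    # midranks: tied values share the average of their ranks (1-based)
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0.0] * len(values)
    i = 0
    while i < len(values):
        j = i
        while j + 1 < len(values) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for k in range(i, j + 1):
            r[order[k]] = (i + j) / 2 + 1
        i = j + 1
    return r

def mann_whitney_u(x, y):
    # unpaired: pool everything, rank, then U from the rank sum of x
    r = ranks(list(x) + list(y))
    return sum(r[: len(x)]) - len(x) * (len(x) + 1) / 2

def wilcoxon_w(x, y):
    # paired: rank |x_i - y_i| (zeros dropped), sum ranks of positive diffs
    d = [a - b for a, b in zip(x, y) if a != b]
    r = ranks([abs(v) for v in d])
    return sum(ri for v, ri in zip(d, r) if v > 0)

x = [1.2, 2.4, 3.1, 4.0]  # toy paired measurements
y = [1.0, 2.9, 2.8, 4.5]
print(mann_whitney_u(x, y), wilcoxon_w(x, y))  # 8.0 3.0
```

scipy.stats.mannwhitneyu and scipy.stats.wilcoxon add the continuity/tie corrections and p-values discussed above; the sketch only shows which quantities get ranked in each test.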
|
41,651
|
Visual display of multiple comparisons test
|
iv <- c("A","B","C","D")
dv <- c(1.2,2.3,4.5,6.7)
gp <- c(1,1,1,2)
par(mai=c(1,1,0,0))
plot(dv, gp, axes=F, xlab="Average time",
     ylab="Grouping based on\nmean comparison",
     ylim=c(0,3), xlim=c(0,7), pch=16)
text(dv, gp-.2, iv)
axis(side=2, labels=c("i", "ii"), at=c(1,2))
axis(side=1)
abline(h=c(1,2),col="blue",lty=3)
Provide a footnote: Means on the same horizontal reference line are not statistically different from each other. Alpha = 0.05, Bonferroni adjustment
And I really like this design because you can flexibly accommodate group means with multiple memberships. As in this case, where C is not different from D and also not different from A and B:
iv <- c("A","B","C", "C", "D")
dv <- c(1.2,2.3,4.5, 4.5, 6.7)
gp <- c(1,1,1,2,2)
par(mai=c(1,1,0,0))
plot(dv, gp, axes=F, xlab="Average time",
     ylab="Grouping based on\nmean comparison",
     ylim=c(0,3), xlim=c(0,7), pch=16)
text(dv, gp-.2, iv)
axis(side=2, labels=c("i", "ii"), at=c(1,2))
axis(side=1)
abline(h=c(1,2),col="blue",lty=3)
|
41,652
|
Visual display of multiple comparisons test
|
Based on your question and follow-up comments, I'd start with a dot-plot. They're quick and easy (even in Excel). Here's a sample with your data:
This chart type scales well, handles large numbers of data points well and is very easy to understand-even to a non-tech audience.
|
41,653
|
Visual display of multiple comparisons test
|
The point is that your dataset is too small (4 groups, 5 values each). The means obtained from such data are not very accurate representative values for each group, and therefore you should not run an ANOVA to make inferences about differences among groups.
Being understandable to the audience is one thing, but being scientifically accurate is more important.
I suggest addressing this with a Kruskal-Wallis test followed by multiple comparisons.
Boxplots (with medians) are probably the most common graphical representation of multiple comparisons of groups. To display differences, you can either draw brackets above pairs that are statistically different and add symbols (*** or N.S.), which looks good with a small number of groups, or add notches to each boxplot (very helpful with a large number of groups), from which anyone can judge each comparison by eye.
You may create boxplots, for example, in R:
data <- data.frame(value=c(rnorm(60), rnorm(20)+3),
group=rep(c("A", "B", "C", "D"), each=20))
value group
1 -1.206926025 A
2 -0.311125313 A
3 1.336579675 A
......
21 1.543827796 B
22 -1.874257866 B
......
80 4.383037868 D
etc.
boxplot(data$value ~ data$group, notch=TRUE,
col = "red", xlab="group", ylab="value")
Boxplots show median values instead of means.
I strongly suggest not displaying ONLY the mean values for each group. Showing the raw data is a final option.
|
41,654
|
Using the eigenvalues from PCA in k-nearest-neighbours
|
There are many options you could pursue, I can suggest a few.
First, if you already have a training set, and assuming the training set is large enough, you could learn a distance metric instead of using a PCA-based weighting.
See Mahalanobis Distance as an example of distance metric learning.
The main idea is that you intend to use a weighted Euclidean metric:
$$
D(x_1,x_2)=\sqrt{(x_1-x_2)^TC(x_1-x_2)}
$$
$$
C=diag(w_1...w_n)
$$
The Mahalanobis distance is defined similarly, but uses the inverse of the covariance matrix, which accounts for correlations between the variables (some of your features may be correlated):
$$
D_M(x_1,x_2)=\sqrt{(x_1-x_2)^T\Sigma^{-1}(x_1-x_2)}
$$
where $\Sigma$ is the covariance matrix.
Another option: instead of using PCA, which is an unsupervised method, use a supervised one, such as Class-Augmented PCA. Generally speaking, you could use any interpretable machine learning classification algorithm (one that gives you feature weights) and use K-NN with those weights.
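A minimal sketch of the two metrics in Python (hypothetical 2-D points; in practice the weights and covariance would come from your training set):

```python
import math

def weighted_euclidean(x1, x2, w):
    # C = diag(w_1 ... w_n): per-feature weights
    return math.sqrt(sum(wi * (a - b) ** 2 for a, b, wi in zip(x1, x2, w)))

def mahalanobis_2d(x1, x2, cov):
    # 2x2 case only: invert the covariance matrix explicitly
    (s11, s12), (s21, s22) = cov
    det = s11 * s22 - s12 * s21
    inv = [[s22 / det, -s12 / det], [-s21 / det, s11 / det]]
    d = [x1[0] - x2[0], x1[1] - x2[1]]
    q = sum(d[i] * inv[i][j] * d[j] for i in range(2) for j in range(2))
    return math.sqrt(q)

x1, x2 = [1.0, 2.0], [3.0, 5.0]
print(weighted_euclidean(x1, x2, [1.0, 1.0]))            # sqrt(13): plain Euclidean
print(mahalanobis_2d(x1, x2, [[1.0, 0.0], [0.0, 1.0]]))  # identity covariance: same value
```

With the identity covariance the two coincide; a non-spherical covariance downweights directions of high variance.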
|
41,655
|
"One-tailed" Levene Test
|
Brown-Forsythe simply performs ANOVA on $z_{ij}=|y_{ij} - \tilde{y}_j|$, where $y_{ij}$ is the $i$th observation in group $j$. Groups with larger spread in $y$ will have larger mean $z$. (Levene is similar, but the $z$'s are defined in terms of the deviations from the group mean instead of the group median.)
If you only have two groups (where a one-tailed test has meaning), you'd simply replace that ANOVA in either of those tests with a plain two-sample t-test on the $z$s ... except one tailed.
Of course, you'd have to specify the direction a priori (before seeing the data).
If you already conclude that the Brown-Forsythe or Levene $F$ is appropriate in the two-group, two-tailed case, then the corresponding $t$-test is necessarily appropriate in the two-group version (it rejects exactly the same cases as the $F$ when working two-tailed) - consequently the only remaining consideration is whether it works as well one-tailed as it does two-tailed. Simple considerations of symmetry in arguments should suffice for that.
So if you think Brown-Forsythe or Levene are okay, then just do a $t$-test. Nothing to it.
[Caveat Emptor: Using such a test prior to an ANOVA to decide whether or not to apply some adjustment for heteroskedasticity or a more robust procedure is not advisable. Better to assume the variances are unequal at the outset.]
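A minimal Python sketch of the two-group version described above (hypothetical data; it computes only the $t$ statistic on the $z$s, leaving the one-tailed critical value to a $t$ table):

```python
import math
import statistics

def brown_forsythe_t(x, y):
    # z_ij = |y_ij - group median|, then an ordinary pooled two-sample t on the z's
    zx = [abs(v - statistics.median(x)) for v in x]
    zy = [abs(v - statistics.median(y)) for v in y]
    nx, ny = len(zx), len(zy)
    sp2 = ((nx - 1) * statistics.variance(zx)
           + (ny - 1) * statistics.variance(zy)) / (nx + ny - 2)
    return (statistics.mean(zx) - statistics.mean(zy)) / math.sqrt(sp2 * (1 / nx + 1 / ny))

x = [1.0, 4.0, 2.0, 9.0, 5.0, 0.0]  # visibly more spread out
y = [3.0, 3.5, 4.0, 3.2, 3.8, 3.6]
t = brown_forsythe_t(x, y)
print(round(t, 3))  # compare (one-tailed) to t_{alpha} with nx + ny - 2 df
```

A large positive $t$ supports the directional alternative that the first group has the larger spread.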
|
"One-tailed" Levene Test
|
Browne-Forsythe simply performs ANOVA on $z_{ij}=|y_{ij} - \tilde{y}_j|$, where $y_{ij}$ is the $i$th observation in group $j$. Groups with larger spread in $y$ will have larger mean $z$. (Levene is s
|
"One-tailed" Levene Test
Browne-Forsythe simply performs ANOVA on $z_{ij}=|y_{ij} - \tilde{y}_j|$, where $y_{ij}$ is the $i$th observation in group $j$. Groups with larger spread in $y$ will have larger mean $z$. (Levene is similar, but the $z$'s are defined in terms of the deviations from the group mean instead of the group median.)
If you only have two groups (where a one-tailed test has meaning), you'd simply replace that ANOVA in either of those tests with a plain two-sample t-test on the $z$s ... except one tailed.
Of course, you'd have to specify the direction a priori (before seeing the data).
If you already conclude that the Browne-Forsythe or Levene $F$ is appropriate in the two-group, two-tailed case, then the corresponding $t$-test is necessarily appropriate in the two-group version (it rejects exactly the same cases as the $F$ when working two-tailed) - consequently the only remaining consideration is whether it works as well two tailed as it does one tailed. Simple considerations of symmetry in arguments should suffice for that.
So if you think Browne-Forsythe or Levene are okay, then just do a $t$-test. Nothing to it.
[Caveat Emptor: Using such a test prior to an ANOVA to decide whether on not to apply some adjustment for heteroskedasticity or more robust procedure is not advisable. Better to assume the variances are unequal at the outset.]
|
"One-tailed" Levene Test
Browne-Forsythe simply performs ANOVA on $z_{ij}=|y_{ij} - \tilde{y}_j|$, where $y_{ij}$ is the $i$th observation in group $j$. Groups with larger spread in $y$ will have larger mean $z$. (Levene is s
|
41,656
|
"One-tailed" Levene Test
|
I know this is 2 years later, but you may want to investigate the software you are using to see if it's already a one-tailed test. In R there are 3 packages that contain Brown-Forsythe (which is simply Levene's, but uses the median instead of the mean), and all of them are the one-sided test.
You can test this yourself just by looking at the F-value that you are given, and looking at an F-Distribution table. Since the F-Distribution is typically expected for ANOVA use, and ANOVA is typically one-tailed, these tables are almost always given as one-tailed. You didn't give specific values, but here’s an example. I randomly drew two samples, (x & y, sampled without replacement) from a normally distributed population. Both samples originally have similar variances (0.73 & 0.25), and both are n=50. I manipulated the data in x to dramatically change its variance (6.99). I ran the Brown-Forsythe, which gave me F(1,98)=4.06, p=0.047. I manipulated the data in x a second time to slightly decrease the variance (6.90). I re-ran Brown-Forsythe, and this time I got F(1,98)=3.75, p=0.056.
Clearly, if alpha=5%, then these latter two manipulated variances of x straddle the line. Now take a look at this F-Distribution table—it has a graph at the top to help assure you that it’s a one-tailed test. https://www.safaribooksonline.com/library/view/random-data-analysis/9780470248775/images/tabA-5a.jpg
For df1=1 and df2=100, the critical value for the one-tailed F-Distribution is 3.94. Since we had df2=98, we would expect our specific critical value to be slightly higher than 3.94, but not by much. The first manipulation had F=4.06, p=0.047, and the second was F=3.75, p=0.056. You can see that according to this one-tailed F-Distribution table, the p-values produced by this Brown-Forsythe test in R come from a one-tailed test, i.e., the critical value for p=0.05 is 3.94, which is right in between the two calculated F-values. If you can use your own software to perform the Brown-Forsythe and manipulate your data to get an F-value close to p=0.05, then you can compare it to this F-Distribution table as I have done, to see what kind of 'tailedness' your software uses to produce its p-values.
However, let’s presume your software uses a two-tailed test, but you still really want a one-tailed test. As long as you are given an F-value from the Brown-Forsythe, you can look it up yourself using the table. If your calculated F is greater than the critical value in the table for your degrees of freedom, then you must reject the null (i.e., the variances are not equal); if your calculated F-value is less than the critical value in the table, then you should fail to reject the null, and assume the variances are equal enough to proceed to whatever parametric test you are intending to run to test the means. If you are running R, then you can use 1-pf(F,df1,df2)
|
"One-tailed" Levene Test
|
I know this is 2 years later, but you may want to investigate the software you are using to see if it's already a one-way test. In R there are 3 packages that contain Brown-Forsythe (which is simply
|
"One-tailed" Levene Test
I know this is 2 years later, but you may want to investigate the software you are using to see if it's already a one-way test. In R there are 3 packages that contain Brown-Forsythe (which is simply Levene's, but uses median instead of mean), and all of them are the one-sided test.
You can test this yourself just by looking at the F-value that you are given, and looking at an F-Distribution table. Since the F-Distribution is typically expected for ANOVA use, and ANOVA is typically one-tailed, these tables are almost always given as one-tailed. You didn't give specific values, but here’s an example. I randomly drew two samples, (x & y, sampled without replacement) from a normally distributed population. Both samples originally have similar variances (0.73 & 0.25), and both are n=50. I manipulated the data in x to dramatically change its variance (6.99). I ran the Brown-Forsythe, which gave me F(1,98)=4.06, p=0.047. I manipulated the data in x a second time to slightly decrease the variance (6.90). I re-ran Brown-Forsythe, and this time I got F(1,98)=3.75, p=0.056.
Clearly, if alpha=5%, then these latter two manipulated variances of x straddle the line. Now take a look at this F-Distribution table—it has a graph at the top to help assure you that it’s a one-tailed test. https://www.safaribooksonline.com/library/view/random-data-analysis/9780470248775/images/tabA-5a.jpg
For df1=1 and df2=100, the critical value for the one-tailed F-Distribution is 3.94. Since we had df2=98, we would expect our specific critical value to be slightly higher than 3.94, but not by much. The first manipulation had F=4.06, p=0.049, and the second was F=3.75, p=0.056. You can see that according to this one-tailed F-Distribution table, the p-values produced by this Brown-Forsythe test in R are using a one-tailed test--i.e., the critical value for p=0.05 is 3.94, which is right in-between the two calculated F-values. If you could use your own software to perform the Brown-Forsythe, and manipulate your data to get to an F-value that is close to p=0.05, then you can compare it to this F-Distribution table as I have done to see what kind of ‘tailedness’ your software uses to produce its p-values
However, let’s presume your software uses a two-tailed test, but you still really want a one-tailed test. As long as you are given an F-value from the Brown-Forsythe, you can look it up yourself using the table. If your calculated F is greater than the critical value in the table for your degrees of freedom, then you must reject the null (i.e., the variances are not equal); if your calculated F-value is less than the critical value in the table, then you should fail to reject the null, and assume the variances are equal enough to proceed to whatever parametric test you are intending to run to test the means. If you are running R, then you can use 1-pf(F,df1,df2)
|
"One-tailed" Levene Test
I know this is 2 years later, but you may want to investigate the software you are using to see if it's already a one-way test. In R there are 3 packages that contain Brown-Forsythe (which is simply
|
41,657
|
Assigning values to missing data for use in binary logistic regression in SAS
|
In general, dealing with missing input values is always problematic. To the best of my knowledge, none of the existing methods can deal with it without introducing some bias to the model, so you have to take this into account during your research. There are at least a few possible options:
ignore data with missing values (which I do believe you do now), which is the "safest" option, but can lead to insufficient data being left to train a good model
fill missing values with some statistical analysis of the data - for example:
mean value of the particular feature/dimension (for real valued variables)
most frequent (modal) value of the particular feature/dimension (for categorical ones), or the median for ordinal variables
train a separate model to predict a missing value, e.g. let's imagine data in $X^k$, and each of the dimensions can have missing inputs, then you can create $k$ models $M_i$, each for predicting the $i$th dimension using the rest of them, so $M_i : X^{k-1} \rightarrow X$, and you use it to preprocess your data
use some generative model that can fill in missing values by itself; one possibility is a Restricted Boltzmann Machine
As was previously stated, each method introduces some bias to the analysis (which has been proven in many papers, for many models), but it can also help you build a better model: everything depends on your data.
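As an illustration of the second option, here is a small pure-Python sketch (the dict-of-rows format, column names, and use of `None` for missing values are all hypothetical) that fills numeric columns with the mean and categorical columns with the most frequent value:

```python
import statistics

def impute(rows):
    # rows: list of dicts with identical keys; None marks a missing value
    filled = [dict(r) for r in rows]
    for col in rows[0]:
        observed = [r[col] for r in rows if r[col] is not None]
        if all(isinstance(v, (int, float)) for v in observed):
            fill = statistics.mean(observed)  # numeric: column mean
        else:
            fill = statistics.mode(observed)  # categorical: modal value
        for r in filled:
            if r[col] is None:
                r[col] = fill
    return filled

rows = [
    {"age": 30, "color": "red"},
    {"age": None, "color": "blue"},
    {"age": 50, "color": None},
    {"age": 40, "color": "red"},
]
out = impute(rows)
print(out[1]["age"], out[2]["color"])
```

As noted above, any such rule biases the analysis; a fancier version would replace the mean/mode rule with a model $M_i$ trained on the remaining dimensions.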
EDIT (after clarification)
A missing value of some $i$th feature/dimension $f_i \in X$ is a lack of observation/knowledge about which particular value $x\in X$ it takes. One can imagine a situation where we ask people to fill out a multi-page survey, and after collecting all the data it turns out we are missing one of a person's pages. We do not know what his/her responses were, but we are quite sure there were some. On the other hand, a person could leave a question blank (without an answer) or write something like "I will not answer this question", which is not missing information; in fact, it is as informative as selecting one of the predefined boxes. In such a scenario we simply have a categorical feature, $f'_i \in X \cup \{ \emptyset \}$. We can either express it as a multi-valued feature, or encode it in unary form by replacing $f'_i$ with $|X|+1$ new binary features $f''_{ij}$ for each $j\in X \cup \{ \emptyset \}$ such that $f''_{ij} = 1 \iff f'_i = j$. The choice between these methods is model- and data-dependent.
|
Assigning values to missing data for use in binary logistic regression in SAS
|
In general, dealing with missing input values is always problematic. To my best knowledge, none of the existing methods can deal with it without introducing some bias to the model, so you have to cons
|
Assigning values to missing data for use in binary logistic regression in SAS
In general, dealing with missing input values is always problematic. To my best knowledge, none of the existing methods can deal with it without introducing some bias to the model, so you have to consider this during your research. There are at least few possible options:
ignore data with missing values (which I do believe you do now), which is the "safest" option, but can lead to insufficient data being left to train a good model
fill missing values with some statistical analysis of the data - for example:
mean value of the particular feature/dimension (for real valued variables)
median value of the particular feature/dimension (for categorical ones)
train a separate model to predict a missing value, e.g. let's imagine data in $X^k$, and each of the dimensions can have missing inputs, then you can create $k$ models $M_i$, each for predicting the $i$th dimension using the rest of them, so $M_i : X^{k-1} \rightarrow X$, and you use it to preprocess your data
use some generative model, that can fill missing values by itself, one possibility is a Restricted Boltzmann Machine
As was previously stated, each method introduces some bias to the analysis (which has been proven in many papers, for many models), but it can also help you build a better model: everything depends on your data.
EDIT (after clarification)
A missing value of some $i$th feature/dimension $f_i \in X$ is lack of observation/knowledge about what particular value $x\in X$ does it have. One can imagine a situation where we are asking people to fill out a multi-page survey, and after getting all the data it turns out we do not have one of the person's pages. We do not know what was his/her response, but we are quite sure there was one. On the other hand a person could give as a blank question (without an answer) or write something like "I will not answer this question", which is not missing information; in fact this is as informative as selecting one of the predefined boxes. In such a scenario we simply have a categorical feature, $f'_i \in X \cup \{ \emptyset \}$. We can either express it as a multi-valued feature, or encode it in unary form by replacing $f'_i$ with $|X|+1$ new binary features $f''_{ij}$ for each $j\in X \cup \{ \emptyset \}$ such that $f''_{ij} = 1 \iff f'_i = j$. Choice between these methods is model- and data-dependent.
|
41,658
|
Confidence interval for Uniform($\theta$, $\theta + a$)
|
It's possible to construct silly confidence intervals that are technically valid, i.e. have the claimed coverage, but it's not obligatory. So I don't know why doing so should demonstrate a flaw in "confidence interval methodology" (whatever that is)—you could have constructed a more appropriate one for your purposes.
A more orthodox approach to this particular problem:
First, note that you can reëxpress the sufficient statistic $(Y,Z)$ to contain an obvious ancillary component: the sample range $Z-Y$. It's a very strong precision index: as it approaches $a$, $\theta$ is known almost exactly. So any inference should be conditional on its observed value $z-y$.
Second, the sample minimum $Y$ conditional on the observed value of the range is uniformly distributed between $\theta$ and $\theta+a-r$. Confidence intervals based on this distribution could be constructed & would be well behaved. Still, the likelihood for $\theta$ is flat between $z-a$ and $y$, so it wouldn't be possible to say for any confidence interval with less than 100% coverage that it contained values of $\theta$ less discrepant with the data than those outside it. Best to stick with the 100% interval.
[Response to comments:
(1) Your confidence interval does have some undesirable properties* but that's no reason to tar all confidence intervals with the same brush. You only took unconditional coverage into account when you derived it & therefore have no right to complain that unconditional coverage is all it gives you.
(2) You're right that $\left(Y-\left(1-\frac{1-\gamma} 2\right)(a-r),Y-\frac{1-\gamma} 2(a-r)\right)$ is a valid C.I., conditional on $r$, but why not use $(Y-\gamma(a-r),Y)$? The likelihood is $\frac{1}{a-r}$ for all values of $\theta$ between $y+r-a$ and $y$, & I don't see any generally compelling reason to honour an arbitrary 95% of those values with inclusion in my confidence interval. The 100% interval $(y+r-a,y)$ is preferable as it separates zero-likelihood values of $\theta$ from those with positive likelihood.
*At least for inference on $\theta$ from a single sample: there may be some applications—say in quality control, where consideration of coverage over repeated samples is more than a thought experiment—for which it could have some use.]
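As a sanity check on the 100% interval, a short Monte Carlo simulation in Python (parameters arbitrary) confirms that $(z-a,\,y)$, equivalently $(y+r-a,\,y)$, always contains $\theta$:

```python
import random

def coverage(theta, a, n, trials=10_000, seed=0):
    """Empirical coverage of the interval (z - a, y) for samples from U(theta, theta + a)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        xs = [rng.uniform(theta, theta + a) for _ in range(n)]
        y, z = min(xs), max(xs)
        hits += (z - a <= theta <= y)   # theta lies in the flat-likelihood region
    return hits / trials

print(coverage(theta=3.0, a=1.0, n=5))  # 1.0
```

Coverage is exactly 1 because $y \ge \theta$ and $z \le \theta + a$ hold in every sample.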
|
41,659
|
Entropy of (Sum of Gaussians) versus Sum of (Entropy of Gaussians)
|
The issue is that you are working with a differential entropy for continuous random variables, which doesn't share all the nice properties of Shannon's entropy for discrete random variables and can behave counter to intuition. In particular, differential entropy can be negative!
The following might help to get a feel for what's going on. First, a little derivation. We have that
\begin{align}
H[X + Y, Y] &= H[X + Y \mid Y] + H[Y] = H[X \mid Y] + H[Y] = H[X, Y], \\
H[X + Y, Y] &= H[Y \mid X + Y] + H[X + Y],
\end{align}
so that,
$$H[X + Y] = H[X, Y] - H[Y \mid X + Y].$$
Since Shannon's entropy is always non-negative, the entropy of $X + Y$ will therefore always be smaller than or equal to the entropy of $X, Y$, in line with your intuition. What must happen in your example is that $H[Y \mid X + Y]$ is negative, which is only possible because it is a differential entropy.
If you want a more well-behaved measure for continuous random variables, use relative entropy.
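To see this concretely, plug independent Gaussians into the closed-form differential entropy $\tfrac12\log(2\pi e\sigma^2)$; with small variances, $H[Y \mid X+Y] = H[X,Y] - H[X+Y]$ comes out negative. A quick Python check:

```python
import math

def h_gauss(var):
    """Differential entropy (in nats) of a Gaussian with the given variance."""
    return 0.5 * math.log(2 * math.pi * math.e * var)

vx = vy = 0.01                        # small variances make the entropies negative
h_joint = h_gauss(vx) + h_gauss(vy)   # H[X, Y] for independent X, Y
h_sum = h_gauss(vx + vy)              # H[X + Y]
print(h_joint - h_sum)                # H[Y | X + Y]: about -1.23, i.e. negative
```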
|
41,660
|
McNemar-Bowker test for comparison of two measures
|
McNemar-Bowker test of symmetry of a k X k contingency table is inherently 2-sided: the alternative hypothesis is undirected. So, in the general case, it cannot be used to test the one-sided alternative that subdiagonal frequencies are larger/smaller than superdiagonal frequencies. But since in your case the differences are consistently in favour of subdiagonal frequencies you can use the test for the directional inference.
The Bowker test is chi-square asymptotic-based and hence is for "large sample" - I've read somewhere (sorry don't remember where, so I'm not quite sure) that the sum in any two symmetric cells, if it is not 0 (the test ignores 0-0 cell pairs altogether), should be at least 10. Clearly, this isn't your case - you have only one pair of symmetric cells with the large sum. There exists an exact version of the test (see) but not in SPSS. But you can bypass the problem if you merge "Once", "Twice", "Three+" categories. Then you'll have the dichotomous case for which Bowker test becomes McNemar test with exact p-value easily computed (SPSS does it).
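For the collapsed 2 X 2 table, the exact McNemar p-value is just a two-sided binomial test of the discordant cell counts against $p = 1/2$, which is easy to reproduce by hand (the counts below are made up for illustration):

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact two-sided McNemar p: binomial tail for min(b, c) out of b + c at p = 1/2."""
    n, k = b + c, min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(1.0, 2 * tail)   # double the one-sided tail, capped at 1

print(mcnemar_exact(8, 2))  # 0.109375
```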
You might also want to consider some alternative tests of symmetry of a contingency table, because it is questionable whether your inquiry is isomorphic to what McNemar-Bowker tests. It tests whether every off-diagonal cell is equal (in the population) to the cell symmetric to it. Might it be that comparing the subdiagonal and the superdiagonal sums is more apt here?
|
41,661
|
What to do when an independent variable is not significant, but it definitely should be!
|
One thing you can do is look at the effect sizes and build a confidence interval around them. “Very” significant coefficients do not necessarily represent strong effects and test results depend a lot on the sample size. Therefore, a failure to reject the null hypothesis is not as such evidence that your results are inconsistent with the previous study (Andrew Gelman regularly puts it that way: “the difference between significant and non-significant is not itself significant”).
Basically the more data you have, the more confidence you can have in the fact that a given coefficient is different from 0 but, again, that's not a measure of the strength of the relationship between the variables. If the confidence intervals overlap or the coefficient or effect size measures are similar but one is not significant, it simply means that one of the studies had less power. If that's the case, one way to “fix” it is simply to collect more data.
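One concrete way to compare your estimate with the published one is a z test on the difference between the two coefficients, alongside their confidence intervals (the numbers below are hypothetical):

```python
import math

def compare_estimates(b1, se1, b2, se2, z_crit=1.96):
    """z statistic for the difference between two independent estimates, with 95% CIs."""
    z = (b1 - b2) / math.sqrt(se1 ** 2 + se2 ** 2)
    ci = lambda b, se: (b - z_crit * se, b + z_crit * se)
    return z, ci(b1, se1), ci(b2, se2)

# one estimate is "significant" (z = 2.5), the other is not (z = 1.4) ...
z, ci1, ci2 = compare_estimates(0.50, 0.20, 0.35, 0.25)
print(round(z, 2))  # 0.47 -- yet the difference itself is nowhere near significant
```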
Also, I know nothing of your field and I am not sure to follow exactly what you are doing but generally speaking one published study would not be enough to convince me that something definitely should be significant, no matter what the relevant p value was. If you have strong theoretical reasons to expect it to be that's another story but you should not overestimate the reproducibility of many published results (see for example John Ioannidis's publications).
|
41,662
|
What to do when an independent variable is not significant, but it definitely should be!
|
After reading more I found out that the values are not significant because the fixed effects model does not work well with the data. I have high heteroskedasticity, which is not fixed by transforming the data to its log form. So far, OLS with robust standard errors adjusted for heteroskedasticity is what is giving us the best results.
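For a single regressor, the heteroskedasticity-robust (White/HC0) correction changes only the standard error, not the slope. A bare-bones Python sketch of the idea (real analyses would use the matrix form via a statistics package):

```python
import math

def ols_robust(x, y):
    """Simple OLS slope with classical and HC0 (White) standard errors."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxx = sum((xi - xbar) ** 2 for xi in x)
    beta = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / sxx
    resid = [yi - (ybar + beta * (xi - xbar)) for xi, yi in zip(x, y)]
    se_classic = math.sqrt(sum(e ** 2 for e in resid) / (n - 2) / sxx)
    se_hc0 = math.sqrt(sum(((xi - xbar) * e) ** 2 for xi, e in zip(x, resid)) / sxx ** 2)
    return beta, se_classic, se_hc0
```

Under heteroskedastic errors the two standard errors diverge, while the slope estimate is unchanged.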
|
41,663
|
Are these variables mutually independent?
|
Even assuming the $X_i$ and $Y_j$ are independent of $N$ and of one another, $S_X$ and $S_Y$ will be positively correlated and (therefore) not independent. We can calculate that correlation given the distribution of $N$.
To illustrate, let $N$ take on either the value $1$ or $12$ with equal probability. Then half the time $(S_X, S_Y)$ is a point in the unit square (when $N=1$) and half the time it is a point near $(6,6)$ and--according to the Central Limit Theorem--is dispersed randomly around that location in an approximately bivariate Normal manner.
This scatterplot depicts a simulation of 10,000 independent $(S_X, S_Y)$ pairs. The colors distinguish the values of $N$.
The correlation should be obvious in this plot: $S_X$ and $S_Y$ both tend to be small when $N$ is small and large when $N$ is large, whence they tend to be small together or large together: that's positive correlation.
The correlation can be computed using formulas for nested (iterated) expectations. For example,
$$\mathbb{E}_{N;X_i}[S_X] = \mathbb{E}_N[\mathbb{E}_{X_i|N}[S_X | N]] = (1/2 + 12(1/2))/2 = 13/4.$$
In a similar manner all relevant multivariate moments of $(S_X, S_Y)$ can be computed based on knowing that $\mathbb{E}[X_i] = \mathbb{E}[Y_j] = 1/2$ and $\mathbb{E}[X_i^2] = \mathbb{E}[Y_j^2] = 1/3$ (if we assume the $X_i$ and $Y_j$ are all independent). The variances are $56/3 - (13/4)^2 \approx 8.104$ and the covariance is $145/8 - (13/4)^2 = 7.5625$, whence the correlation is $363/389 \approx 0.9332$. Indeed, in this simulation the observed correlation was $0.9316$, apparently differing from this theoretical value only by chance variation.
This answer obviously extends to more than two such sums. It provides a nice example of variables that can be conditionally independent (which will be the case when the $X_i$ are independent of the $Y_j$) but not themselves independent.
Simulation Code
The simulation was carried out in R:
N <- 10^4                                  # number of simulated (S_X, S_Y) pairs
m <- 12                                    # the larger value the random count can take
n <- ifelse(runif(N) < 1/2, 1, m)          # the random count: 1 or 12 with equal probability
x <- matrix(runif(m*N), ncol=m)            # uniform X_i, one row per simulation
y <- matrix(runif(m*N), ncol=m)            # uniform Y_j, one row per simulation
s <- t(sapply(1:N, function(i) c(sum(x[i, 1:n[i]]), sum(y[i, 1:n[i]]))))  # (S_X, S_Y)
col <- ifelse(n==1, "Blue", "Red")         # colour points by the value of the count
plot(s, col=col, pch=19, cex=.5, xlab="X", ylab="Y")
cor(s)                                     # compare to the theoretical 363/389
|
41,664
|
ANOVA and Regression give opposite results in R
|
Essentially, the question is: how can one coefficient in the linear model be significantly different from 0 while ANOVA shows no significant effect, and vice versa?
For this, let's consider a simpler example.
set.seed( 123 )
data <- data.frame( x= rnorm( 100 ), g= rep( letters[1:10], each= 10 ) )
data$x[ data$g == "d" ] <- data$x[ data$g == "d" ] + 0.5
boxplot( x ~ g, data )
l <- lm( x ~ 0 + g, data )
summary( l )
anova( l )
You can see that there is only one group (d) that stands out of the line (has a coefficient significantly different from zero). However, given that the nine other groups do not show an effect, the anova returns $p > 0.1$. However, let us remove some of the groups:
data2 <- data[ data$g %in% c( "a", "d" ), ]
anova( lm( x ~ 0 + g, data2 ) )
returns
Df Sum Sq Mean Sq F value Pr(>F)
g 2 6.8133 3.4066 5.7363 0.01182 *
Residuals 18 10.6898 0.5939
ANOVA considers the overall variance within and between the groups. In the first case (10 groups) the variance between the groups is smaller because of the many groups with no effect. In the second, there are only two groups, and all the between groups variance comes from the difference between these two groups.
How about the reverse? This is easier: imagine three groups with means equal to -1, 0, and 1. The total average is 0. Each group separately does not necessarily have a significant difference from 0, but there is enough difference between groups 1 and 3 to account for a significant total between-group variance.
|
41,665
|
ANOVA and Regression give opposite results in R
|
What is going on here is a multiple comparisons issue. You have 10 df for interaction so you can look at 10 independent interaction effects (although I suspect the 10 you are actually looking at, i.e. the 10 regression effects, are not independent).
The 10 df interaction test will be significant at some level if and only if Scheffe's multiple comparison method can find an interaction contrast that is significant at that level. So using Scheffe's method you would not be able to find an interaction regression coefficient that is significant. What is being reported as P values for the regression coefficients is equivalent to using Fisher's LSD multiple comparison method, which is notoriously more liberal in declaring significance. So basically you have one method that is declaring no effects and another that finds a few, but since they are different methods that is not surprising. You need to decide what standards you want to use. (A more sophisticated use of LSD would not look at individual coefficients unless the overall test was significant.)
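A quick numerical illustration of the gap between the two criteria, using SciPy for the quantiles and a hypothetical 60 error df: under LSD an individual contrast needs only $|t|$ above roughly 2, while Scheffe's simultaneous criterion for a 10-df family requires $|t|$ above $\sqrt{10\,F_{0.95;\,10,\,60}}$, about 4.5:

```python
from scipy.stats import f, t

def lsd_vs_scheffe(df_effect=10, df_error=60, alpha=0.05):
    """Critical |t| for a single contrast: Fisher's LSD vs Scheffe's simultaneous bound."""
    lsd = t.ppf(1 - alpha / 2, df_error)
    scheffe = (df_effect * f.ppf(1 - alpha, df_effect, df_error)) ** 0.5
    return lsd, scheffe

lsd, scheffe = lsd_vs_scheffe()
print(round(lsd, 2), round(scheffe, 2))  # Scheffe's hurdle is far higher
```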
Another way of thinking about this is that the 10df interaction test is an average of ten 1df tests and if the interaction effects are not very striking they can get lost in the process of averaging them. However, if you look at them individually, you can see their effect.
I will not get into the main effects issues. But I think what R is telling you most strongly about the interactions (P=.00513) is that the differential effect of using Nutrients a and d changes depending on whether you use the unnamed Herbivore or the Paired Herbivore. If the differential effect of a and d can change, then there has to be some effect for the pair a and d, however the regression coefficient for Nut. d (which really looks at their difference) SEEMS to be saying there is none -- but it only seems to be saying that because main effects in the presence of interaction get so convoluted that they are not worth trying to figure out.
|
41,666
|
How many observations per subject are necessary to fit a random slope in a mixed model?
|
In a basic mixed effects model with a random intercept $a_i$,
$$E[Y_{it} \mid X_{it}, a_i] = \alpha + a_i + \beta X_{it}, \qquad a_i \sim \mathcal{N}(0, \sigma^2),$$
clusters having just one observation contribute influence to both the estimated variance of the random effect and the slope of the fixed effect. This is because the random intercept is never actually estimated. While some numerical solvers produce estimates for random intercepts, they are actually post-hoc statistics calculated after joint estimation of the random effect variance and the fixed effect slope.
If you fit mixed effects models with unbalanced designs, it's important to verify the normality of these estimates (this can be a strong and influential assumption when there are a small number of clusters). As an example, suppose I run a health care clinic and we're verifying the management of AIDS in subjects on antiretroviral therapies, such as efavirenz. If I combine prevalent cases at baseline and incident cases during follow-up, my analysis is now sensitive to the distribution of incidence. For instance, suppose 70% of my cases were diagnosed two years ago, and have had successful management of disease while 30% of my cases are incident and have high viral loads before starting therapy. I now have an uneven bimodal distribution of random intercepts (viral load at "visit 1") and my fixed effect is biased toward the null (even though the therapy is actually effective in managing disease).
A GEE on the other hand makes no assumption about the distribution of random effects and is consistent for the population averaged effect estimate: $\beta_M$ (M for marginal) rather than $\beta_C$ (C for conditional). These models are related to one another, but on average $\beta_M \leq \beta_C$ yet tests of inference about $\beta_M$ can often be of higher power.
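For a logistic link the attenuation from $\beta_C$ to $\beta_M$ even has a well-known closed-form approximation (Zeger, Liang & Albert, 1988), sketched here in Python:

```python
import math

def marginal_from_conditional(beta_c, sigma2):
    """Approximate marginal (population-averaged) slope implied by a conditional
    logistic slope beta_c and random-intercept variance sigma2:
    beta_M = beta_C / sqrt(1 + c^2 * sigma2), with c = 16*sqrt(3) / (15*pi)."""
    c2 = (16 * math.sqrt(3) / (15 * math.pi)) ** 2
    return beta_c / math.sqrt(1 + c2 * sigma2)

print(marginal_from_conditional(1.0, 4.0))  # about 0.65: beta_M <= beta_C
```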
The dangers of stepwise variable selection in regression
The figure shows the distribution of estimated slope parameters over all models, not just those which were significantly different from zero. The spike at zero represents all the models where the slope was deemed insignificant, and so a zero-slope model was used. The point is to demonstrate that the variable-selection procedure leads to estimates of $\beta$ which are either zero (and thus far too low) or extremely large (because the larger estimates are "more significant").
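A quick simulation (not from the original post) reproduces this shape: repeatedly fit a simple regression, record the slope only when its t-statistic is "significant", and record zero otherwise. The sample size, true slope, and cutoff below are illustrative choices of mine:

```python
import math
import random

random.seed(1)

def sim_selected_slope(n=20, beta=0.3, n_sim=2000, t_crit=2.1):
    """Fit y = a + b*x by OLS repeatedly; keep b only when |t| > t_crit,
    otherwise record 0 (the dropped-variable, zero-slope model)."""
    kept = []
    for _ in range(n_sim):
        x = [random.gauss(0, 1) for _ in range(n)]
        y = [beta * xi + random.gauss(0, 1) for xi in x]
        mx, my = sum(x) / n, sum(y) / n
        sxx = sum((xi - mx) ** 2 for xi in x)
        b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sxx
        a = my - b * mx
        rss = sum((yi - a - b * xi) ** 2 for xi, yi in zip(x, y))
        se = math.sqrt(rss / (n - 2) / sxx)
        kept.append(b if abs(b) > t_crit * se else 0.0)
    return kept

slopes = sim_selected_slope()
nonzero = [s for s in slopes if s != 0.0]
# A histogram of `slopes` shows the same picture as the figure: a spike
# at 0, plus retained estimates whose mean overshoots the true slope.
```

The retained (nonzero) estimates are biased away from the true value because only the larger, "more significant" estimates survive the selection step.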
Standard error of the quotient of two estimates (Wald estimators) using the delta method
Here's an example in Stata of how to create the ratio and test a hypothesis using nlcom:
. webuse regress
. regress y x1 x2 x3
Source | SS df MS Number of obs = 148
-------------+------------------------------ F( 3, 144) = 96.12
Model | 3259.3561 3 1086.45203 Prob > F = 0.0000
Residual | 1627.56282 144 11.3025196 R-squared = 0.6670
-------------+------------------------------ Adj R-squared = 0.6600
Total | 4886.91892 147 33.2443464 Root MSE = 3.3619
------------------------------------------------------------------------------
y | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
x1 | 1.457113 1.07461 1.36 0.177 -.666934 3.581161
x2 | 2.221682 .8610358 2.58 0.011 .5197797 3.923583
x3 | -.006139 .0005543 -11.08 0.000 -.0072345 -.0050435
_cons | 36.10135 4.382693 8.24 0.000 27.43863 44.76407
------------------------------------------------------------------------------
. nlcom ratio:_b[x1]/_b[x2], post
ratio: _b[x1]/_b[x2]
------------------------------------------------------------------------------
y | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
ratio | .6558606 .4221027 1.55 0.122 -.1784571 1.490178
------------------------------------------------------------------------------
. test ratio=.5
( 1) ratio = .5
F( 1, 144) = 0.14
Prob > F = 0.7125
There are formulas in the pdf manual under nlcom. A terse explanation can be found in the Stata FAQ on the delta method.
Added in response to the OP's comment below:
If you have two separate regressions, you have all the ingredients for the formula that Glen_b linked to, other than the covariance term. Here you have two choices. You can assume it's zero if that makes sense with your model and do the calculation "manually". Or you can estimate the two equations as a system, which will give you cross-equation covariances between the coefficients. It's hard to know which is better without the details. One way (out of several possible ways) to do the latter is with Seemingly Unrelated Regression:
. webuse regress
. sureg (eq1:y x1 x2) (eq2:y x1 x3)
Seemingly unrelated regression
----------------------------------------------------------------------
Equation Obs Parms RMSE "R-sq" chi2 P
----------------------------------------------------------------------
eq1 148 2 4.54006 0.3758 91.48 0.0000
eq2 148 2 3.770546 0.5694 211.94 0.0000
----------------------------------------------------------------------
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
eq1 |
x1 | 7.472932 .98949 7.55 0.000 5.533568 9.412297
x2 | -.4768772 .7799875 -0.61 0.541 -2.005625 1.05187
_cons | -1.374358 2.883296 -0.48 0.634 -7.025514 4.276798
-------------+----------------------------------------------------------------
eq2 |
x1 | 4.338581 .7852935 5.52 0.000 2.799434 5.877728
x3 | -.0026865 .0003774 -7.12 0.000 -.0034261 -.0019468
_cons | 16.32873 3.214735 5.08 0.000 10.02797 22.6295
------------------------------------------------------------------------------
. nlcom ratio:[eq1]_b[x1]/[eq2]_b[x1]
ratio: [eq1]_b[x1]/[eq2]_b[x1]
------------------------------------------------------------------------------
| Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
ratio | 1.722437 .2773696 6.21 0.000 1.178803 2.266071
------------------------------------------------------------------------------
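The delta-method formula behind nlcom can also be applied by hand. A minimal sketch (the function name is mine; the coefficient covariance is set to 0 below only because regress does not print it, so the result differs from nlcom's SE of 0.422, which uses the full covariance):

```python
import math

def ratio_se(a, b, var_a, var_b, cov_ab=0.0):
    """Delta-method SE of the ratio a/b:
    Var(a/b) ~= (a/b)^2 * (var_a/a^2 + var_b/b^2 - 2*cov_ab/(a*b))."""
    r = a / b
    var = r ** 2 * (var_a / a ** 2 + var_b / b ** 2 - 2.0 * cov_ab / (a * b))
    return r, math.sqrt(var)

# Coefficients and standard errors of x1 and x2 from the first regression above
r, se = ratio_se(1.457113, 2.221682, 1.07461 ** 2, 0.8610358 ** 2)
```

The point estimate matches nlcom's 0.6558606 exactly; only the SE changes once the covariance term is supplied.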
Standard error of the quotient of two estimates (Wald estimators) using the delta method
Another method of looking at the ratio is due to Fieller. An excellent post is at: How to compute the confidence interval of the ratio of two normal means
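Fieller's interval comes from inverting a t-test on $a - \theta b$, which reduces to solving a quadratic in $\theta$. A hedged sketch (the function name and toy inputs are mine, not from the linked post):

```python
import math

def fieller_ci(a, b, v11, v22, v12, t):
    """Fieller interval for a/b: the solutions of
    (b^2 - t^2*v22)*theta^2 - 2*(a*b - t^2*v12)*theta + (a^2 - t^2*v11) <= 0,
    where v11 = Var(a), v22 = Var(b), v12 = Cov(a, b)."""
    A = b ** 2 - t ** 2 * v22
    B = -2.0 * (a * b - t ** 2 * v12)
    C = a ** 2 - t ** 2 * v11
    disc = B ** 2 - 4.0 * A * C
    if A <= 0 or disc < 0:
        return None  # denominator not clearly nonzero: interval is unbounded
    lo = (-B - math.sqrt(disc)) / (2.0 * A)
    hi = (-B + math.sqrt(disc)) / (2.0 * A)
    return lo, hi

# Toy numbers: means 10 and 5, variances 1 and 0.25, independent estimates
lo, hi = fieller_ci(10.0, 5.0, 1.0, 0.25, 0.0, 1.96)  # brackets 10/5 = 2
```

Unlike the delta method, Fieller's interval can be unbounded when the denominator is not clearly different from zero, which is often the honest answer in that situation.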
What is the reasoning behind the formulae for the different standard errors of measurement?
Okay, I'm not sure if anyone's still checking this (since it's from a year ago), but here's my response to your question: "I can follow the definitions but the article doesn't really explain why the calculations should be different in the different contexts. Is anyone able to provide an explanation?"
The calculations are different because of the purpose of each of these measurements. I'll go through them one by one. For each one, let's consider you taking a test. We know that in the general population, the average score for a certain measurement equals a specific value (let's say 100, to keep it simple). There's an associated standard deviation (let's say, 15, to keep it consistent with your chart) that describes how the data is distributed around that mean. So if you measure and graph a large population, you should get a curve centered at 100 with a certain spread defined by the standard deviation.
Let's also define a few terms, to make sure we're clear on what they all mean:
"True score" - the sample's true value for this score (i.e. what a perfectly accurate machine will spit out as a value when applied to this sample)
"Actual score" - a machine's actual output when applied to a sample
Standard Error of Measurement
This is basically a measurement of how accurate the machine's scoring is. If, for example, a sample's true score was 90, a perfectly accurate machine would give you a score of 90 every time. The lower the reliability of the machine, though, the more varied would be the responses when you measured the sample. A somewhat accurate machine might give you scores of 85, 87, 91, 90, 92 for five tries. A less accurate machine might give you 93, 81, 96, 88, 89 for 5 tries. Consider this as a new curve based off of measuring the same person a bunch of times. That person has one "true score", but the machine will create a spread of "Actual scores". The less reliable the machine, the more spread out the actual scores.
Standard Error of Estimation
This description is a little confusing. It says "The standard error of estimation is an estimate of the variability of a candidate’s true score, given their actual score." A true score doesn't vary - it's fixed. I think what this is trying to say is if you have a bunch of people with the same actual score, then this measurement would be a measurement of the variability of their true scores. Here, the more reliable the machine, the less variation among those people's actual scores. This is also (though counterintuitively for many people) true if the measurement machine is really unreliable. If the machine is really unreliable, we basically have no idea where those people should really be scoring, so they are all likely coming from the middle of the true distribution (that's where the most people are) and so there won't be as much variation among them.
Standard Error of Prediction
The standard error of prediction can be pictured this way. You take one measurement of the sample with your machine. Let's say it gives you a value of 95. You then say, "I predict that if I measure this sample again, I will get a score of x." If the machine is perfectly reliable, you can say you'll get a score of 95. But the more unreliable the machine, the less likely your prediction. Most often, you'd say something like "I predict that if I measure this sample again, I will get a score between x and y." The less reliable the machine, the wider range you have to give to be confident in your prediction. It's higher because, as you say in a comment above, you have two sources of unreliability - your initial measurement and the second (upcoming) measurement.
I hope that this helps (and gets seen!).
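Numerically, the three quantities come apart like this, assuming the classical-test-theory formulas the linked article presumably uses (SD $s$, reliability $r$): SEM $= s\sqrt{1-r}$, SEE $= s\sqrt{r(1-r)}$, SEP $= s\sqrt{1-r^2}$. A small sketch with the chart's SD of 15:

```python
import math

def sem(s, r):
    # error of measurement: spread of actual scores around one true score
    return s * math.sqrt(1 - r)

def see(s, r):
    # error of estimation: spread of true scores given one actual score
    return s * math.sqrt(r * (1 - r))

def sep(s, r):
    # error of prediction: spread of a retest score given a first score
    return s * math.sqrt(1 - r ** 2)

# With reliability 0.9 and SD 15: SEE < SEM < SEP
vals = [round(f(15, 0.9), 2) for f in (sem, see, sep)]
```

The ordering matches the intuition above: estimation "shrinks toward the mean" (SEE = SEM·$\sqrt{r}$), while prediction stacks two sources of unreliability, so SEP is the largest.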
What is the reasoning behind the formulae for the different standard errors of measurement?
Only the first of those is a standard error of measurement. Calling all three of them by the same name only confuses things. See sec 3.8 Errors of Measurement, Estimation, and Prediction, pp 66-69 in Lord & Novick's Statistical Theories of Mental Test Scores, Addison-Wesley, 1968.
What is the reasoning behind the formulae for the different standard errors of measurement?
You are measuring 3 different things, so why would you expect to have only one standard error? Intuitively, you will probably have different means; the same is true for the standard errors.
In your example you can see that the third process takes 2 measurements, which is likely to be more reliable than taking only one.
The calculation is different because the context is different. From Wikipedia: "The standard error is the standard deviation of the sampling distribution of a statistic. The term may also be used to refer to an estimate of that standard deviation, derived from a particular sample used to compute the estimate." If you estimate different things, you will have different standard errors; the expression of the standard error depends on what you measure.
Iterative proportional fitting in R
This is old, but here we go:
As @Henrico wrote, it seems that what you are trying to achieve is indeed raking. Instead of using survey::rake you might fit the distribution to the marginals "by hand" using a Poisson GLM, as suggested by @DWin. To get the right frequencies you need to use an offset.
Let
$n_{ij}$ be your sample1
$N_{ij}$ the expected frequencies for the adjusted table
$\hat{N}_{ij}$ the fitted values for the adjusted table
We need to fit a model (see Little & Wu 1991 in JASA):
$$\log \left( \frac{N_{ij}}{n_{ij}} \right) = \lambda + \lambda^1_i + \lambda^2_j$$
thus we have
$$\log \hat{N}_{ij} - \log n_{ij} = \hat{\lambda} + \hat{\lambda^1_i} + \hat{\lambda^2_j}$$
where $\log n_{ij}$ is the mentioned offset.
You can estimate it with any GLM software by
Create an artificial table of $N_{ij}$ that satisfies independence and has the desired marginals.
Fit a main-effects log-linear/Poisson model to $N_{ij}$ with the $n_{ij}$ (observed frequencies, sample1) as an offset.
Get the fitted values.
For example, this will get you the target frequencies f:
# Your data
sample1 <- structure(c(6L, 14L, 46L, 16L, 6L, 21L, 62L, 169L, 327L, 174L,
44L, 72L, 43L, 100L, 186L, 72L, 23L, 42L), .Dim = c(6L, 3L), .Dimnames = list(
c("Primary", "Lowersec", "Highersec", "Highershort", "Higherlong",
"University"), c("B", "F", "W")))
sample2 <- structure(c(171796L, 168191L, 240671L, 69168L, 60079L, 168169L,
954045L, 1040981L, 1872732L, 726410L, 207366L, 425786L, 596239L,
604826L, 991640L, 323215L, 134066L, 221696L), .Dim = c(6L, 3L
), .Dimnames = list(c("Primary", "Lowersec", "Highersec", "Highershort",
"Higherlong", "University"), c("B", "F", "W")))
library(dplyr)
# Turn to a data frame
d1 <- as_data_frame( as.table(sample1), stringsAsFactors = FALSE)
# Create artificial freqs based on sample2 and join with d1
N <- sum(sample2)
d <- outer(rowSums(sample2)/N, colSums(sample2)/N) %>%
as.table() %>%
as_data_frame() %>%
mutate(
p = n / sum(n),
N = round(p * sum(sample2))
) %>%
select(Var1, Var2, p, N) %>%
left_join(d1)
#> Joining, by = c("Var1", "Var2")
# Fit the model
mod <- glm( N ~ Var1 + Var2 + offset(log(n)), data=d, family=poisson("log") )
# Get the fitted values
d$f <- predict(mod, type="response")
d
#> # A tibble: 18 x 6
#> Var1 Var2 p N n f
#> <chr> <chr> <dbl> <dbl> <int> <dbl>
#> 1 Primary B 0.018763534 168442 6 124197.33
#> 2 Lowersec B 0.019765059 177432 14 119743.66
#> 3 Highersec B 0.033832098 303713 46 336937.75
#> 4 Highershort B 0.012190206 109432 16 90514.26
#> 5 Higherlong B 0.004374806 39273 6 43486.57
#> 6 University B 0.008887215 79781 21 163193.43
#> 7 Primary F 0.111702426 1002761 62 960181.53
#> 8 Lowersec F 0.117664672 1056285 169 1081463.51
#> 9 Highersec F 0.201408086 1808056 327 1792009.27
#> 10 Highershort F 0.072570318 651469 174 736456.24
#> 11 Higherlong F 0.026043943 233798 44 238592.76
#> 12 University F 0.052907063 474951 72 418616.69
#> 13 Primary W 0.061364877 550877 43 637701.15
#> 14 Lowersec W 0.064640297 580281 100 612790.82
#> 15 Highersec W 0.110645603 993274 186 976095.99
#> 16 Highershort W 0.039867250 357891 72 291821.50
#> 17 Higherlong W 0.014307508 128440 23 119431.67
#> 18 University W 0.029065039 260919 42 233840.87
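For comparison with the GLM route above, the raking algorithm itself is only a short loop. A minimal pure-Python sketch (the function name is mine; survey::rake and loglin(..., fit = TRUE) do the same thing with more checks):

```python
def ipf(table, row_targets, col_targets, tol=1e-9, max_iter=1000):
    """Iterative proportional fitting (raking): alternately rescale the rows
    and columns of the seed table until its margins match the targets."""
    t = [row[:] for row in table]  # work on a copy of the seed
    for _ in range(max_iter):
        # Scale each row to its target margin
        for i, target in enumerate(row_targets):
            s = sum(t[i])
            t[i] = [x * target / s for x in t[i]]
        # Scale each column to its target margin, tracking the pre-scale error
        max_err = 0.0
        for j, target in enumerate(col_targets):
            s = sum(row[j] for row in t)
            max_err = max(max_err, abs(s - target))
            for row in t:
                row[j] *= target / s
        if max_err < tol:
            break
    return t

# Toy 2x2 seed raked to margins (60, 40) by (50, 50)
fitted = ipf([[10.0, 20.0], [30.0, 40.0]], [60.0, 40.0], [50.0, 50.0])
```

For strictly positive seed tables this converges to the same fitted values as the offset-GLM approach, which is why the two routes are interchangeable here.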
Iterative proportional fitting in R
Make a glm fit to the marginals with Poisson errors (yielding a log-linear model) and then use predict on an expand.grid data.frame built from the row and column values of the second sample. (There's no particular advantage that I can see in using IPF to estimate a log-linear model of this sort.)
require(reshape2)
Loading required package: reshape2
> melt(sample1)
Var1 Var2 value
1 Primary B 6
2 Lowersec B 14
3 Highersec B 46
4 Highershort B 16
5 Higherlong B 6
6 University B 21
7 Primary F 62
8 Lowersec F 169
9 Highersec F 327
10 Highershort F 174
11 Higherlong F 44
12 University F 72
13 Primary W 43
14 Lowersec W 100
15 Highersec W 186
16 Highershort W 72
17 Higherlong W 23
18 University W 42
> m_sample1<- melt(sample1)
> glm( value ~ Var1+Var2, data=m_sample1)
Call: glm(formula = value ~ Var1 + Var2, data = m_sample1)
Coefficients:
(Intercept) Var1Highersec Var1Highershort Var1Lowersec Var1Primary
-36.56 162.00 63.00 70.00 12.67
Var1University Var2F Var2W
20.67 123.17 59.50
Degrees of Freedom: 17 Total (i.e. Null); 10 Residual
Null Deviance: 121200
Residual Deviance: 22510 AIC: 197.4
That was the linear model. This is the multiplicative (log-linear) model:
> glm( value ~ Var1+Var2, data= m_sample1, family="poisson")
Call: glm(formula = value ~ Var1 + Var2, family = "poisson", data = m_sample1)
Coefficients:
(Intercept) Var1Highersec Var1Highershort Var1Lowersec Var1Primary
1.7213 2.0357 1.2779 1.3550 0.4191
Var1University Var2F Var2W
0.6148 2.0515 1.4528
Degrees of Freedom: 17 Total (i.e. Null); 10 Residual
Null Deviance: 1287
Residual Deviance: 21.05 AIC: 139.1
> predict(glm( value ~ Var1+Var2, data=m_sample1, family="poisson"), data.frame(Var1="Lowersec", Var2="B") )
1
3.076272
Edit; More detail requested:
Multiply the grand sum by combinations of the appropriate entries from exp(coef(fit)). The non-Intercept entries in coef(fit) let you compute the estimated ratios of proportions in "non-corner" cells to the "corner cell". The Var1:University with Var2:F cell would have an estimate of exp( 1.7213 + 0.6148+ 2.0515) in the original model (which is what predict(fit) or predict(fit, expand.grid(data.frame( rows=rowMeans(m_sample1), cols=colMeans(m_sample1)))) should give you). You then need to multiply by the ratio of the grand sums of the new data to the grand sum of the fit data.
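For readers who want to see the raking alternative the parenthetical dismisses, classical IPF can be sketched in a few lines of Python (a minimal illustration with made-up marginals, not the poster's data): it simply alternates between rescaling rows and columns of a seed table until both sets of marginal totals match.

```python
import numpy as np

def ipf(seed, row_targets, col_targets, tol=1e-10, max_iter=1000):
    """Iterative proportional fitting: rescale `seed` until its row and
    column sums match the target marginals."""
    table = seed.astype(float).copy()
    for _ in range(max_iter):
        # Scale each row to match its target total
        table *= (row_targets / table.sum(axis=1))[:, None]
        # Scale each column to match its target total
        table *= (col_targets / table.sum(axis=0))[None, :]
        if np.allclose(table.sum(axis=1), row_targets, atol=tol):
            break
    return table

# Toy example: fit a uniform seed to arbitrary (hypothetical) marginals
seed = np.ones((2, 3))
fitted = ipf(seed, row_targets=np.array([10.0, 20.0]),
             col_targets=np.array([5.0, 10.0, 15.0]))
print(fitted.sum(axis=1))  # matches [10, 20]
print(fitted.sum(axis=0))  # matches [5, 10, 15]
```

The fixed point of this iteration is the same fit that the Poisson/log-linear glm above produces for a main-effects model.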
|
41,675
|
Equivalence between single sample cross-validation index and the Akaike information criterion for prediction
|
You only ever really need to fit the full model once for cross validation. You can use the results from the full run to work out the residuals from predicting a subset. Now suppose you consider a specific group of observations, say $m$, where $n-m\geq p$ where $n$ is the number of samples and $p$ is the number of betas. The standard least squares solution using all the data is $b=(X^TX)^{-1}X^TY$. Now let the m samples removed be in the $m\times p$ matrix $Z$ and the corresponding observed responses be in the $m\times 1$ vector $W$. Now we can write the "out of sample" prediction for $W$ as follows:
$$Zb_{-Z}=Z(X^TX-Z^TZ)^{-1}(X^TY-Z^TW)$$
That is, we subtract the contribution of the m points away from the full dataset. Next we use the blockwise inversion formula setting $X^TX=A,\;Z^T=B,\;Z=C,\;D=I_m$. After some tedious manipulations we get
$$Zb_{-Z}=(I_m-H_Z)^{-1} (Zb- H_ZW)$$
where
$H_Z=Z (X^TX)^{-1}Z^T$
finally the "leave m out" residuals are given as
$$ W-Zb_{-Z}= (I_m-H_Z)^{-1} (W-Zb)$$
$$= (I_m-H_Z)^{-1} e_{Z}$$
where $e_Z$ is the vector of residuals for the m samples when they are included in the model. Taking their sum of squares gives
$$e_{Z}^T (I_m-H_Z)^{-1} (I_m-H_Z)^{-1} e_Z$$
The idea is then to take all the ${n\choose m }$ combinations of $Z$ available in the sample. But this count grows like $O(n^m)$ and is infeasible for all but very small m. For $m=1$ we have the PRESS statistic given as:
$$\sum_i\frac{e_i^2}{(1-h_{ii})^2}$$
taking logs and using the approximations $h_{ii}\approx\frac{p}{n}$ and $(1-q)^{-2}\approx 1+2q$ we get
$$n\log\left(\sum_ie_i^2\left(1+ \frac{2p}{n}\right)\right)=n\log\left(\sum_ie_i^2\right) +n\log\left(1+ \frac{2p}{n}\right)\approx n\log\left(\sum_ie_i^2\right) +2p=AIC$$
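The $m=1$ identity is easy to verify numerically. The following Python sketch (synthetic data, chosen only for illustration) compares the hat-matrix shortcut $\sum_i e_i^2/(1-h_{ii})^2$ against brute-force leave-one-out refits:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 30, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

# Full fit: hat matrix and in-sample residuals
H = X @ np.linalg.inv(X.T @ X) @ X.T
e = y - H @ y
press_shortcut = np.sum((e / (1 - np.diag(H))) ** 2)

# Brute force: refit n times, leaving one observation out each time
loo_sq = 0.0
for i in range(n):
    mask = np.arange(n) != i
    b = np.linalg.lstsq(X[mask], y[mask], rcond=None)[0]
    loo_sq += (y[i] - X[i] @ b) ** 2

print(press_shortcut, loo_sq)  # identical up to rounding
```

The two numbers agree exactly (up to floating-point error), since $e_i/(1-h_{ii})$ is precisely the leave-one-out residual.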
|
41,676
|
Sampling from the normal-gamma distribution in R
|
Sampling from normal-gamma distribution is easy, and in fact the algorithm is described on Wikipedia:
Generation of random variates is straightforward:
Sample $\tau$ from a gamma distribution with parameters $\alpha$ and $\beta$
Sample $x$ from a normal distribution with mean $\mu$ and variance $1/(\lambda \tau)$
What leads to the following function:
rnormgamma <- function(n, mu, lambda, alpha, beta) {
  if (length(n) > 1)
    n <- length(n)
  tau <- rgamma(n, alpha, beta)
  x <- rnorm(n, mu, sqrt(1/(lambda*tau)))
  data.frame(tau = tau, x = x)
}
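The same two-step scheme ports directly to Python/NumPy, which is handy as a cross-check. Note that NumPy parameterises the gamma by shape and scale, so the rate $\beta$ enters as scale $1/\beta$; the parameter values below are arbitrary illustrations.

```python
import numpy as np

def rnormgamma(n, mu, lam, alpha, beta, rng=None):
    """Draw n samples (tau, x) from a Normal-Gamma(mu, lambda, alpha, beta)."""
    rng = np.random.default_rng() if rng is None else rng
    # NumPy's gamma takes shape and scale, so rate beta becomes scale 1/beta
    tau = rng.gamma(shape=alpha, scale=1.0 / beta, size=n)
    x = rng.normal(loc=mu, scale=np.sqrt(1.0 / (lam * tau)))
    return tau, x

tau, x = rnormgamma(100_000, mu=0.0, lam=2.0, alpha=3.0, beta=2.0,
                    rng=np.random.default_rng(1))
print(tau.mean())  # near alpha/beta = 1.5
print(x.mean())    # near mu = 0
```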
|
41,677
|
Sampling from the normal-gamma distribution in R
|
Use the rigamma() function from the pscl package on CRAN. You can also take a look at the ghyp package.
|
41,678
|
Sampling from the normal-gamma distribution in R
|
OK, this is very late, but you can approximate samples from the normal-inverse-gamma bivariate distribution using a Gibbs sampler, where the two alternating sampling distributions would be the normal distribution and the inverse gamma distribution. Did you look into that?
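For concreteness, here is a minimal Python sketch of that Gibbs idea for a normal model with unknown mean and variance, using the standard conjugate full conditionals (a flat prior on the mean and a $1/\sigma^2$ prior on the variance are assumed purely for simplicity; the data are synthetic):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(loc=5.0, scale=2.0, size=500)   # synthetic data
n, ybar = len(y), y.mean()

mu, sig2 = 0.0, 1.0                            # arbitrary starting values
draws = []
for it in range(3000):
    # mu | sig2, y  ~  Normal(ybar, sig2/n)             (flat prior on mu)
    mu = rng.normal(ybar, np.sqrt(sig2 / n))
    # sig2 | mu, y  ~  Inv-Gamma(n/2, sum((y-mu)^2)/2)  (prior 1/sig2)
    sig2 = 1.0 / rng.gamma(n / 2.0, 2.0 / np.sum((y - mu) ** 2))
    if it >= 500:                              # discard burn-in
        draws.append((mu, sig2))

mus, sig2s = np.array(draws).T
print(mus.mean(), sig2s.mean())  # near the true mean 5 and variance 4
```

The inverse-gamma draw is implemented as the reciprocal of a gamma draw, since NumPy has no inverse-gamma sampler built in.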
|
41,679
|
Random forests vs boosting
|
I would use whichever one performed better out of sample.
So far, I've found it impossible to tell which model will be better for a novel problem a priori.
|
41,680
|
Random forests vs boosting
|
Just to start, a quick thought.
Random Forests can be trained in parallel and are much faster to fit; Boosting is an inherently sequential algorithm. However, Boosting might converge in fewer iterations.
Boosting might overfit when there are many noisy features, but so can Random Forests.
On the other hand, their aims are similar: produce many weak learners that are as different from each other as possible. Random Forests tackle the problem with randomization, while Boosting focuses on the misclassified examples of previous models to build a different one.
|
41,681
|
Sample size effects on R squared
|
My guess is that you only ran these simulations once. If you run them a few times the results will vary, and you might get a smaller coefficient for the first one. But in general the reason for this pattern is that the true correlation for the underlying population is 0, and your simulations are following the law of large numbers.
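That behaviour is easy to reproduce directly. With two truly uncorrelated normal variables, the sample $R^2$ of a simple regression has expectation roughly $1/(n-1)$, so it shrinks toward 0 as $n$ grows; a quick simulation sketch (averaging over many repetitions, with no connection to any particular dataset):

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_r2(n, reps=2000):
    """Average squared sample correlation between two independent normals."""
    vals = []
    for _ in range(reps):
        x, y = rng.normal(size=n), rng.normal(size=n)
        vals.append(np.corrcoef(x, y)[0, 1] ** 2)
    return float(np.mean(vals))

results = {n: mean_r2(n) for n in (10, 100, 1000)}
print(results)  # averages shrink roughly like 1/(n-1)
```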
|
41,682
|
covariance of RVs under a nonlinear transformation
|
Let's tackle this in relatively simple steps.
First, covariances are found in terms of expectations; e.g.,
$$\eqalign{
\text{Cov}(x_1x_2, x_1x_3) & = E[(x_1x_2 - E[x_1x_2])(x_1x_3 - E[x_1x_3])] \\
& = E[x_1^2x_2x_3] - E[x_1x_2E[x_1x_3]] - E[E[x_1x_2]x_1x_3] + E[x_1x_2]E[x_1x_3] \\
& = E[x_1^2x_2x_3] - E[x_1x_2]E[x_1x_3].
}$$
This reduces the problem of finding a covariance of the form $\text{Cov}(x_1^{i_1}x_2^{i_2}x_3^{i_3}, x_1^{j_1}x_2^{j_2}x_3^{j_3})$ to that of finding expectations of monomials $x_1^{k_1}x_2^{k_2}x_3^{k_3}$.
Next, by writing $x_1 = 2 + z_1$, $x_2 = -1 + z_2$, and $x_3 = 3 + z_3$, the distribution of $(z_1, z_2, z_3)$ is multinormal with zero mean and the same covariance matrix $\Sigma$. Therefore we can obtain the expectation of a monomial in the $x_i$ by expanding the product:
$$E(x_1^{k_1}x_2^{k_2}x_3^{k_3}) = E((z_1+2)^{k_1}(z_2-1)^{k_2}(z_3+3)^{k_3})$$
which itself is a linear combination of monomials in the $z_i$.
Third, the Cholesky decomposition $\Sigma = \mathbb{U}^\intercal \mathbb{U}$ for an upper-triangular matrix $\mathbb{U}$ exhibits the $z_i$ as linear combinations of uncorrelated standard normal variates $y_i$, which are therefore independent (this is where multinormality comes to the fore). In vector notation,
$$z = y \mathbb{U}.$$
Substituting, we see that monomials in the $z_i$ expand to polynomials in the $y_i$. Once again exploiting the linearity of expectation, it suffices to obtain the expectation of monomials in the $y_i$. But because the $y_i$ are independent, we see that
$$E[y_1^{l_1}y_2^{l_2}y_3^{l_3}] = E[y_1^{l_1}]E[y_2^{l_2}]E[y_3^{l_3}].$$
This was the crux of the matter.
Finally, it is well known (and easy to compute) that the expected value of $y^l$ for a nonnegative integral power of a standard normal variate $y$ is $0$ when $l$ is odd (which is a beautiful simplification, because it makes any monomial with an odd power of any variable disappear) and otherwise
$$E[y^l] = (l-1)!! = (l-1)(l-3)...(3)(1).$$
The example in the question is a little complicated because $\mathbb{U}$ is not very nice. However, doing the calculations (in Mathematica), I obtain
$$E[x_1^2x_2x_3] = -4, \quad E[x_1x_2] = -1, \quad E[x_1x_3] = 6,$$
whence
$$\text{Cov}(x_1x_2, x_1x_3) = -4 - (-1)(6) = 2.$$
As a check, I simulated $10^6$ draws from this multinormal distribution and computed the sample values of these four quantities. (Many draws are needed because higher-order moments have large sample variances.) The estimates (with the correct values shown in parentheses) were $-3.98 (-4)$, $-1.003 (-1)$, $5.999 (6)$, and--in a separate simulation--$1.95 (2)$. The closeness of all these results confirms the correctness of this computation.
I chose Mathematica for this work in part because it handles polynomial calculations well. Here is the core of the code used for the computations. It all comes down to computing the expectation of a monomial in the $z_i$. This monomial is given by the vector of its exponents, e. The covariance matrix $\Sigma$ is the only other input:
expectation[e_, \[Sigma]_] :=
Module[{n = Length[\[Sigma]], u = CholeskyDecomposition[\[Sigma]], x, y, reps, p, f},
x = Table[Unique["x"], {n}]; (* Original variables *)
y = Table[Unique["y"], {n}]; (* Uncorrelated variables *)
reps = Rule @@ # & /@ ({x, y . u}\[Transpose]);
p = CoefficientRules[ Times @@ (x ^ e) /. reps // Expand, y];
f[k_Integer /; OddQ[k]] := 0;
f[k_Integer /; EvenQ[k]] := (k - 1)!!;
f[Rule[k_List, x_]] := x Times @@ (f /@ k);
Sum[f[q], {q, p}]
]
As a check, expectation should reproduce the variances of the original variables (the diagonal entries). These would be determined by the exponent vectors $(2,0,0)$, $(0,2,0)$, and $(0,0,2)$, in order: twice the identity matrix when concatenated. Here, a has previously been initialized to the covariance matrix of the question:
expectation[#, a] & /@ (2 IdentityMatrix[3])
$\{4,2,30\}$
That's exactly correct. (Mathematica is doing exact calculations for this small problem, not numerical ones.) In fact, we can reproduce the original covariance matrix by generating all six exponent vectors (the others are $(1,1,0)$, $(1,0,1)$, and $(0,1,1)$) and applying expectation to recover all second multivariate moments:
Partition[expectation[#, a] & /@ Total /@ Tuples[IdentityMatrix[3], 2], 3]
The output is exactly the original matrix a.
To accommodate the second step (incorporating the nonzero means), we expand the polynomial in that step and compute the expectation term by term:
Last[#] expectation[First[#], a] & /@
(CoefficientRules[#, {x1, x2, x3}] & /@ ((x1 + 2) (x2 - 1)(x1 + 2) (x3 + 3)
// Expand) // First)
$-4$
Similar expressions involving (x1+2)(x2-1) and (x1+2)(x3+3) compute the other values needed.
The simulations were carried out with these three commands, which take about a second to execute:
f = MultinormalDistribution[{2, -1, 3}, a];
data = {#[[1]]^2 #[[2]] #[[3]], #[[1]] #[[2]] , #[[1]] #[[3]] } & /@
RandomVariate[f, 10^6];
Append[Mean[data], Covariance[data[[All, 2 ;; 3]]][[1, 2]]]
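The simulation check does not require Mathematica. The question's covariance matrix is not reproduced in this excerpt, but the matrix below is one choice consistent with the quoted moments (variances $\{4,2,30\}$, $E[x_1x_2]=-1$, $E[x_1x_3]=6$, $E[x_1^2x_2x_3]=-4$); with it, a NumPy simulation recovers the same covariance of $2$:

```python
import numpy as np

rng = np.random.default_rng(0)
mean = np.array([2.0, -1.0, 3.0])
# Hypothetical Sigma, reverse-engineered to match the moments quoted above
Sigma = np.array([[4.0, 1.0, 0.0],
                  [1.0, 2.0, 1.0],
                  [0.0, 1.0, 30.0]])

x1, x2, x3 = rng.multivariate_normal(mean, Sigma, size=1_000_000).T
cov = np.mean(x1 * x1 * x2 * x3) - np.mean(x1 * x2) * np.mean(x1 * x3)
print(cov)  # near 2, in line with the Mathematica result
```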
|
41,683
|
Standard deviation of a ratio (percentage change)
|
If you don't know the distribution, the usual approach would be via Taylor expansion.
e.g. see here or top of p 6 here
or
http://en.wikipedia.org/wiki/Taylor_expansions_for_the_moments_of_functions_of_random_variables
(You have to recognize that the two sample means are themselves random variables to apply it.)
---
Edit:
The formula is directly relevant for your case because $Var(100(y-z)/z) = 100^2 Var(\frac{y}{z} -1) = 100^2 Var(y/z)$.
I don't know of a specific book reference off the top of my head; it feels a bit like asking for a reference for how to do long division.
It's an absolutely standard technique for approximating means and variances, based quite directly (and in a fairly obvious way) off Taylor series, which have been around for 300 years now. It's certainly mentioned in books, but I've never learned it from a book, in spite of encountering it many times - it's always 'expand this transformation in a Taylor series' (usually, but not always about the mean) and 'take expectations' or 'take variances' (or whatever, as necessary).
Once you learn how to do Taylor series (standard early-undergrad mathematics) and know a few properties of expectations and variances (standard early mathematical statistics), you're done; it's something undergrad students are given as an exercise.
I'll see if I can dig up a reference; there's sure to be something in a standard old reference like Cox and Hinkley or Kendall and Stuart or Feller or something (none of which I have to hand at the moment).
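To make the technique concrete: for independent $y$ and $z$ the first-order Taylor (delta-method) expansion gives $\mathrm{Var}(y/z)\approx \frac{\mu_y^2}{\mu_z^2}\left(\frac{\sigma_y^2}{\mu_y^2}+\frac{\sigma_z^2}{\mu_z^2}\right)$, which can be checked against simulation in a few lines of Python (arbitrary illustrative parameters; the approximation requires $z$ to stay well away from 0):

```python
import numpy as np

rng = np.random.default_rng(0)
mu_y, mu_z = 10.0, 20.0
s_y, s_z = 1.0, 2.0

# First-order Taylor (delta method) approximation for independent y and z
approx_var = (mu_y**2 / mu_z**2) * (s_y**2 / mu_y**2 + s_z**2 / mu_z**2)

y = rng.normal(mu_y, s_y, size=1_000_000)
z = rng.normal(mu_z, s_z, size=1_000_000)
sim_var = np.var(y / z)

print(approx_var, sim_var)  # close agreement
```

When $y$ and $z$ are correlated (as the two sample means generally are), a cross term $-2\sigma_{yz}/(\mu_y\mu_z)$ is added inside the parentheses.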
|
41,684
|
Standard deviation of a ratio (percentage change)
|
The Taylor series method yields an estimator for the variance which can then be used to estimate a symmetric confidence interval based upon a crude normality assumption of your ratio.
A more general approach which directly yields a (possibly unsymmetric) confidence interval for the ratio of two random variables is MOVER-R by Donner and Zhou:
Donner, Zhou: Closed-form confidence intervals for functions of the normal mean and standard deviation.
Statistical Methods in Medical Research, 21(4), 347–359 (2012)
|
41,685
|
What is the probability of randomly selecting n random numbers in the range 1-m in sorted order?
|
David's answer is incorrect because the relevant events are not independent: the event that the nth draw is greater than all previous draws and the event that the (n-1)th draw is greater than all previous draws are not independent.
Here is some python code to calculate the exact probability recursively:
# Recursively finds the number of combinations that are in order for n draws
# of random integers from 1 to m.
def recursively_find_combinations(m,n):
if n==2: # For the n=2 case just return the sum from 1 to m
return float(m*(m+1))/2
else: # Otherwise sum from 1 to m the ordered combinations when n = n-1
return sum([recursively_find_combinations(x,n-1) for x in range(1,m+1)])
# Finds the probability that n draws of random integers from 1 to m will be in order.
def find_p(m,n):
return recursively_find_combinations(m,n)/(m**n)
Let's break it down by the number of combinations of draws that are in order and how many combinations are possible.
There are $m^n$ possible combinations.
Let's start by looking at m=10 and n=2. There are 100 combinations ($10^2$). How many are in order? There are 10 combinations where the first number is a 1; all 10 of these will be in order because the second number will be greater than or equal to 1. There are 10 combinations where the first number is a 2; 9 of these will be in order, since only one is eliminated, the combination (2,1). Following this reasoning there are 10 + 9 + 8 + ... + 1 $=\sum_{x=1}^{x=m}x = \frac{m(m+1)}{2} = 55$ ordered combinations. There are 100 possible combinations, so the probability of getting an ordered result is $\frac{55}{100} = 0.55$.
So for n = 2 the number of ordered combinations is $\sum_{x_1=1}^{x_1=m}x_1$.
Now let's look at m=10 and n=3. There are 1000 possible combinations ($10^3$).
There are 100 combinations that start with a 1. The number that are in order (55) are the same as the ordered combinations for m=10 and n=2 because all numbers are greater or equal to 1. There are 100 combinations that start with a 2. There are 45 combinations that are in order, the same 55 as when it started with a 1 minus the 10 combinations of the form 2,1,x. None of those are in order because 1 is smaller than 2 no matter what x is. When we start with a 3 there are 55-10-9 ordered combinations. Do this all the way up to 10. Starting with a 10 there are 55-10-9-8-7-6-5-4-3-2-1 ordered combinations.
So the total number of ordered combinations for m=10 and n=3 is $\sum_{x_2=1}^{x_2=m}\sum_{x_1=1}^{x_1=x_2}x_1$.
Following this reasoning the total number of combinations for an arbitrary n and m is $\sum_{x_{n-1}=1}^{x_{n-1}=m}\sum_{x_{n-2}=1}^{x_{n-2}=x_{n-1}}\sum_{x_{n-3}=1}^{x_{n-3}=x_{n-2}}...\sum_{x_1=1}^{x_1=x_2}x_1$.
To find the probability of drawing an ordered combination just divide the number of ordered combinations by the possible combinations.
$=\frac{\sum_{x_{n-1}=1}^{x_{n-1}=m}\sum_{x_{n-2}=1}^{x_{n-2}=x_{n-1}}\sum_{x_{n-3}=1}^{x_{n-3}=x_{n-2}}...\sum_{x_1=1}^{x_1=x_2}x_1}{m^n}$
Some results for m = 10 while varying n:
P(ordered|m=10,n=2)= 0.55000000
P(ordered|m=10,n=3)= 0.22000000
P(ordered|m=10,n=4)= 0.07150000
P(ordered|m=10,n=5)= 0.02002000
P(ordered|m=10,n=6)= 0.00500500
P(ordered|m=10,n=7)= 0.00114400
P(ordered|m=10,n=8)= 0.00024310
P(ordered|m=10,n=9)= 0.00004862
|
|
41,686
|
What is the probability of randomly selecting n random numbers in the range 1-m in sorted order?
|
Recursive Solution
Let the first number chosen be $k$ (with probability $1/m$). The chance that all are in order now equals the chance that the remaining $n-1$ are (a) in order and (b) equal or exceed $k$. Subtracting $k-1$ from all of the remaining numbers puts them in one-to-one correspondence with the possible ways of selecting numbers in order from $1, 2, \ldots, m-k+1$; according to part (b), this chance has to be multiplied by $(m-k+1)^{n-1}$. Letting $p(n,m)$ denote the chance, this provides the recursion
$$p(n,m) = \frac{1}{m}\sum _{k=1}^m \left(\frac{m-k+1}{m}\right)^{n-1} p(n-1,m-k+1),$$
with $p(1,m)=1$ to get it started.
The unique solution is
$$p(n,m) = \frac{(n+1)^{[m-1]}}{m^n(m-1)!}$$
where "$^{[m]}$" denotes an ascending factorial power; in general,
$$x^{[m]} = x(x+1) \cdots (x+m-1).$$
To verify the solution we need only to show it satisfies the recursion and the initial condition; this is a matter of algebraic checking.
For instance, with $n=3, m=10$ we obtain
$$p(3,10) =\frac{(3+1)^{[10-1]}}{10^3(10-1)!} = \frac{4\cdot 5\cdots 11 \cdot 12 }{10^3 (9 \cdot 8 \cdots 3 \cdot 2 \cdot 1)} = \frac{10 \cdot11\cdot 12}{10^3(3\cdot2\cdot1)}=\frac{11}{50}=0.22.$$
Combinatorial Solution
A selection of $n$ values (with repetition) from the numbers $\{1,2,\ldots, m\}$ is a sequence $(k_1, k_2, \ldots, k_n)$. Such sequences are in one-to-one correspondence with the sequences $(k_1, k_2+1, \ldots, k_n + n-1)$ drawn from the numbers $\{1, 2, \ldots, m+n-1\}$. (For instance, drawing $(1,1,4)$ from $1..10$ would correspond to drawing $(1,2,6)$ from $1..12$.) Moreover, the original sequence is in order if and only if the derived sequence is in strict order. It thereby determines (and is determined by) the subset $\{k_1, k_2+1, \ldots, k_n+n-1\}$, of which there are $\binom{n+m-1}{n}$ possibilities (by definition). Because there are $m^n$ equally probable sequences, the desired probability is
$$p(n,m) = m^{-n}\binom{n+m-1}{n}.$$
This of course is just another way to express the previous formula for $p(n,m)$ (or, if you like, equating the two results gives us an explicit formula for the binomial coefficient!).
For example,
$$p(3, 10) = 10^{-3}\binom{3+10-1}{3} = 10^{-3}\binom{12}{3} = \frac{12\cdot 11\cdot 10}{10^3(3 \cdot 2 \cdot 1)} = \frac{11}{50},$$
exactly as determined with the recursive solution.
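Both formulas are easy to verify numerically; here is a short Python sketch (function names are mine) implementing the recursion and the closed form side by side:

```python
from functools import lru_cache
from math import comb

# Recursive solution, transcribed directly from the recursion above,
# with p(1, m) = 1 as the base case.
@lru_cache(maxsize=None)
def p_rec(n, m):
    if n == 1:
        return 1.0
    return sum(((m - k + 1) / m) ** (n - 1) * p_rec(n - 1, m - k + 1)
               for k in range(1, m + 1)) / m

# Combinatorial closed form: C(n + m - 1, n) / m^n.
def p_closed(n, m):
    return comb(n + m - 1, n) / m ** n

# Both agree, e.g. p(3, 10) = 11/50 = 0.22.
print(p_closed(3, 10), p_rec(3, 10))
```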
|
|
41,687
|
What is the probability of randomly selecting n random numbers in the range 1-m in sorted order?
|
Unless I got something wrong, I think it might be possible to reduce your $n,m$ problem to a $2,n$ problem, which you have solved for $n=10$ and whose solution is more generally $p = \frac{n+1}{2n}$
$P[X_m \ge X_{m-1} \ge ... \ge X_1]=$
$P[X_m \ge \max (X_{m-1},X_{m-2} ... X_1);X_{m-1} \ge \max (X_{m-2},X_{m-3} ... X_1);...X_1 \ge \max(X_1)=X_1]$
Because they are i.i.d., this becomes:
$=P[X_m \ge \max (X_{m-1},X_{m-2} ... X_1]P[X_{m-1} \ge \max (X_{m-2},X_{m-3} ... X_1]... P[X_1 \ge \max(X_1)=X_1]$
$=\prod_{i=1}^{i=m-1} P[X_m \ge X_{i}] \prod_{i=1}^{i=m-2} P[X_{m-1} \ge X_{i}]\prod_{i=1}^{i=m-(m-1)} P[X_{2} \ge X_{i}]$
where $P_{i\not=j}[X_i \ge X_j] = \frac{n+1}{2n}$
|
|
41,688
|
How to get conditional variance from Schur complement?
|
Assume WLOG that everything is mean $0$. If you know the formula for the inverse of a matrix in block form, then it should be as simple as checking that
$$(X, Y)^T V^{-1} (X, Y) = X^T A^{-1} X + (Y - \mu_{Y | X})^T S^{-1} (Y - \mu_{Y|X}) + c$$
where $S$ is the Schur complement and $c$ is a constant and $\mu_{Y|X} = B^TA^{-1}X$. Why is this all you need to do? Because this is precisely what is required to factor the Gaussian density
$$
f(x, y) = |2\pi V|^{-1/2}\exp\left(-\frac 1 2 (x, y)^T V^{-1} (x, y)\right)
$$
into a product of two Gaussians (one Gaussian representing the marginal of $X$ and the other the conditional distribution of $Y | X = x$). This, I think, just involves sticky algebra; one lesson here is that Schur complements play a nice algebraic role in the breaking up of quadratic forms that look like $z^T A^{-1}z$, but maybe someone can point out something deeper going on.
The second part of the question, as stated currently, doesn't make any sense; $V^{-1} = 0$ is an impossible equation to satisfy. Maybe what you mean is that you want to show $\mbox{Var}(X|Y) = 0$ provided that $V$ is singular? (This is false, by the way).
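If you want to check the Schur-complement identity numerically rather than algebraically, here is a small numpy sketch (the matrix below is arbitrary, chosen only to be positive definite): the conditional covariance $S = C - B^TA^{-1}B$ equals the inverse of the $(Y, Y)$ block of the precision matrix $V^{-1}$.

```python
import numpy as np

# Build a random positive-definite 5x5 covariance and partition it as
#   V = [[A, B], [B^T, C]]  with X = first 3 coords, Y = last 2.
rng = np.random.default_rng(1)
M = rng.normal(size=(5, 5))
V = M @ M.T + 5 * np.eye(5)
A, B, C = V[:3, :3], V[:3, 3:], V[3:, 3:]

# Schur complement = conditional covariance of Y given X.
schur = C - B.T @ np.linalg.inv(A) @ B

# The (Y, Y) block of the precision matrix is the inverse of the Schur complement.
precision_yy = np.linalg.inv(V)[3:, 3:]

print(np.allclose(schur, np.linalg.inv(precision_yy)))  # True
```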
|
|
41,689
|
How to get conditional variance from Schur complement?
|
The detailed derivation may be found in:
von Mises, Richard (1964). Mathematical theory of probability and statistics. Chapter VIII.9.3. Academic Press.
I have added this reference to the Wikipedia article that you linked to, for the benefit of anyone else who stumbles over this question.
|
|
41,690
|
What is the difference between the 'Pivot table' and 'Contingency table'?
|
You may create a contingency table using a software tool called pivot table :)
A contingency table is a crosstable with rows, columns and data related to each row/column combination. You may draw such a table on a piece of paper, you may use an OLAP cube as the source of data etc. As this site says, a contingency table is essentially a display format used to analyse and record the relationship between two or more categorical variables.
A pivot table is one of the possible ways of creating a contingency table. A typical pivot table has the visual form of the contingency table, although a pivot table might have only one column or even zero etc. The pivot operation in spreadsheet software can be used to generate a contingency table from sampling data. However you may use the pivot table as a tool to play with the data in other ways, too.
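For instance, in pandas (toy data, made up for illustration) a contingency table and an equivalent pivot look like this:

```python
import pandas as pd

# Toy data illustrating two categorical variables.
df = pd.DataFrame({
    "sex":    ["M", "M", "F", "F", "F"],
    "smoker": ["Y", "N", "Y", "Y", "N"],
})

# A contingency table directly:
ct = pd.crosstab(df["sex"], df["smoker"])

# The same table produced by a pivot with a counting aggregation:
pt = df.pivot_table(index="sex", columns="smoker",
                    aggfunc="size", fill_value=0)

print(ct)
print((ct.values == pt.values).all())  # the pivot reproduces the contingency table
```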
|
|
41,691
|
Where can I use Chebychev's inequality?
|
I once convinced a very tall woman to have dinner with me by using Chebychev's inequality to argue that her high partner height threshold would lead to many lonely nights spent with her cat. Alcohol was involved, so arithmetic mistakes were made that exaggerated the main thrust of the conclusion.
It didn't last.
In this case, the three-sigma rule would have served me better since heights are actually normal and the rule gives a tighter bound. Now, if she had an income threshold, this would have been a better anecdote.
|
|
41,692
|
Where can I use Chebychev's inequality?
|
While it's most often used for establishing bounds in various things, here's an example of it being used for constructing intervals for a real problem:
http://www.sciencedirect.com/science/article/pii/S0001457504000910
(It's not actually necessary for this problem; tighter bounds can be obtained.)
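For a concrete sketch of interval construction from the inequality (this is the generic Chebyshev construction, not the paper's exact method): since $P(|X-\mu| \ge k\sigma) \le 1/k^2$ for any distribution with finite variance, setting $1/k^2 = \alpha$ gives a distribution-free interval $\mu \pm k\sigma$ with coverage at least $1-\alpha$.

```python
import math
import numpy as np

# Chebyshev multiplier: 1/k^2 = alpha  =>  k = 1/sqrt(alpha).
def chebyshev_k(alpha):
    return 1.0 / math.sqrt(alpha)

k = chebyshev_k(0.05)
print(k)  # about 4.472, versus 1.96 under a normality assumption

# Quick empirical check on a skewed distribution: exp(1) has mu = sigma = 1.
rng = np.random.default_rng(2)
x = rng.exponential(scale=1.0, size=100_000)
inside = np.mean(np.abs(x - 1.0) < k * 1.0)
print(inside)  # coverage comfortably exceeds the guaranteed 0.95
```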
|
|
41,693
|
Finding mean when class sizes are unequal
|
Number of students:
11+10+7+4+4+3+1 = 21+11+8 = 40
Total number of absent days across all students:
(0+6)/2*11+(6+10)/2*10+(10+14)/2*7+(14+20)/2*4+
(20+28)/2*4+(28+30)/2*3+(38+40)/2*1 = 487
Mean number of absent days per student:
(Total number of absent days) / ( Number of students) = 487 / 40 = 12.175
Hence the mean number of absent days per student is approximately 12.
But as Henry pointed out, your groups overlap; that is, it is not clear to which group a student with 6 or 10 absent days belongs.
As Max said, we are assuming that the distribution of the number of days absent within each class is symmetric about the midpoint of the class. This means, for example, that in the class
6-10
where there are 10 students, each of those students is expected to have
(6+10)/2 = 8
absent days.
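The same computation as a short Python sketch (variable names are mine):

```python
# Grouped-mean computation for the table above: each class is represented
# by its midpoint, weighted by the number of students in it.
midpoints = [3, 8, 12, 17, 24, 29, 39]   # (lower + upper) / 2 for each class
counts    = [11, 10, 7, 4, 4, 3, 1]

total_days = sum(m * c for m, c in zip(midpoints, counts))   # 487
n_students = sum(counts)                                     # 40
print(total_days / n_students)  # 12.175
```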
|
|
41,694
|
Finding mean when class sizes are unequal
|
The point of this question is to show that not all means need to be evenly weighted to be summed. It may lose some granularity in the final answer, but the estimate still holds true. In some ways, you can think of it as a mean of means.
So the theory is:
class mean = sum(mean(range_of_days_absent) * number_of_students_for_that_day_range) /
             total_number_of_students
To start the example:
mean of group 1 = mean(0-6) = 3
mean of group 2 = mean(6-10) = 8
etc...
number of students in group 1 = 11
number of students in group 2 = 10
etc...
so to solve you do:
answer = ((3 * 11) + (8 * 10) + ...) / the_total_number_of_students
which becomes:
(3*11 + 8*10 + 12*7 + 17*4 + 24*4 + 29*3 + 39*1) / 40 = 487 / 40 = 12.175 days
Update
Update
Here's an alternative approach: each student falls in a category, and you find the median category.
e.g. day range 0-6 is Labeled Group A, day range 6-10 is Labeled Group B...
Using this approach, you essentially find the median student
11*A + 10*B + 7*C + 4*D + 4*E + 3*F + 1*G
The median group is group B, which is 6-10 days and the mean/median number of days in group B, is 8 days.
Notice how in this approach the long tail (groups F and G) is disregarded, and the estimate drops from ~12 to 8 days.
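The median-group search can be sketched as:

```python
# Locating the median group from cumulative counts, following the
# alternative approach above (group labels A..G in table order).
labels = ["A", "B", "C", "D", "E", "F", "G"]
counts = [11, 10, 7, 4, 4, 3, 1]

n = sum(counts)              # 40 students
median_rank = (n + 1) / 2    # 20.5, i.e. between the 20th and 21st student

cum = 0
median_group = None
for label, c in zip(labels, counts):
    cum += c
    if cum >= median_rank:
        median_group = label
        break

print(median_group)  # B: cumulative counts reach 21 at group B (ranks 12-21)
```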
|
|
41,695
|
Finding mean when class sizes are unequal
|
The method is the same as for continuous grouped data with equal intervals.
First find the class mark of each interval, equal to (lower limit + upper limit) / 2. Then multiply each frequency by its class mark.
Add all these products and divide by the total frequency.
The result is the mean.
|
|
41,696
|
Finding mean when class sizes are unequal
|
The overlap of classes in your example is a bit confusing, so I'll use some slightly modified values
No. of days 0-5 6-9 10-13 14-19 20-27 28-30 38- 40
No. of students 11 10 7 4 4 3 1
An approximation that does not assume symmetry within the classes, as you put it, is to model the days absent with a geometric distribution:
$$
P(X=k) = (1-p)^{k}p
$$
As a first approximation we'll assume that each value in a class occurs with the same probability; e.g. 0 absent days is observed $\tfrac{11}{6}$ times (11 students spread over the 6 values 0-5).
The maximum likelihood estimate of $p$ is
$$
\hat p = \frac{n}{n+\sum_{i=1}^n k_i} \\
= \frac{40}{40+0\cdot \tfrac{11}{6}+1\cdot\tfrac{11}{6}+\ldots+39\cdot\tfrac{1}{3}+40\cdot\tfrac{1}{3}} \\
= 0.0786
$$
Using this value of $p$ we can now go back and correct for our assumption that each value in a class occurs with the same probability. For the $0-5$ class we have that
$$
P(X=0)=(1-p)^0p=0.0786 \\
P(X=1)=(1-p)^1p = 0.0724 \\
P(X=2)=0.0667 \\
P(X=3)=0.0615 \\
P(X=4)=0.0566 \\
P(X=5)=0.0522 \\
$$
Since the total number of observations in that class is 11, the expected number of observations for $k=0$ is:
$$
11\cdot \frac{0.0786}{0.0786+0.0724+0.0667+0.0615+0.0566+0.0522} = 2.228
$$
and for $k=\{1,2,3,4,5\}$ the expected counts are found to be $\{2.053, 1.891, 1.743, 1.606, 1.48\}$, and similarly for the rest of the classes. If we then update our maximum likelihood estimate of $p$, we get $\hat p = 0.0797$. This can be repeated until convergence. Once the converged value of $p$ has been obtained, the mean of the geometric distribution is
$$
\frac{1}{p}-1
$$
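The iterative procedure above can be sketched in Python. This is an illustrative sketch of the method (not code from the original answer); only `numpy` is assumed, and the class bounds and counts are copied from the table.

```python
import numpy as np

# Class bounds (inclusive) and frequencies from the table above
classes = [(0, 5), (6, 9), (10, 13), (14, 19), (20, 27), (28, 30), (38, 40)]
counts = [11, 10, 7, 4, 4, 3, 1]
n = sum(counts)
k = np.arange(41)

def mle_p(weights):
    # MLE of p for a geometric distribution; weights[j] = expected count of value j
    return n / (n + np.sum(k * weights))

# First approximation: spread each class count uniformly over its values
weights = np.zeros(41)
for (lo, hi), c in zip(classes, counts):
    weights[lo:hi + 1] = c / (hi - lo + 1)
p = p_init = mle_p(weights)  # ~0.0786, as in the answer

# Iterate: redistribute each class by geometric probabilities, re-estimate p
for _ in range(200):
    probs = (1 - p) ** k * p
    for (lo, hi), c in zip(classes, counts):
        block = probs[lo:hi + 1]
        weights[lo:hi + 1] = c * block / block.sum()
    p_new = mle_p(weights)
    if abs(p_new - p) < 1e-12:
        p = p_new
        break
    p = p_new

mean_days = 1 / p - 1  # mean of the fitted geometric distribution
print(round(p, 4), round(mean_days, 2))
```

Each pass redistributes the class totals according to the current geometric fit, so the uniform-within-class assumption is progressively corrected.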
|
41,697
|
Is output of Deamer deconvolution not a density?
|
It seems you are really putting my library to the test!
The output of deamer is an estimate of a density.
Several things:
Although, theoretically, any nonparametric density estimation (with deconvolution or not) ensures that the resulting estimate integrates to 1 (by use of a kernel which is itself a density), there are necessarily numerical approximations at several levels in any algorithm (not to mention the numerical integration itself). Therefore, you cannot expect to find an integral exactly equal to 1. Here 1.17 is reasonable given the sample size and the very weird problem you suggest.
You may test the deamer examples - just for checking - using integrate(predict, lower, upper, obj=est) for any deamer object named "est" and defining the "lower" and "upper" integration bounds. You will find that estimates integrate between 0.96 and 1.1 in my experience. By the way integrate() is built-in and quite reliable!
The problem you suggest is very awkward: deconvolving two uniforms with such measurement noise is very unusual and should probably be considered under parametric methods.
Your code does not add noise to your unobserved variable. Here is a corrected example, with a "reasonable" signal-to-noise ratio:
x <- runif(2000, min = 0, max = 1)          # unobserved variable of interest
e <- runif(2000, min = -0.1, max = 0.1)     # measurement noise
y <- x + e                                  # observed, noise-contaminated variable
err <- runif(1000, min = -0.1, max = 0.1)   # auxiliary sample of pure errors
decon <- deamerSE(y, error = err, from = -1, to = 2)
plot(decon)
integrate(predict, -1, 2, obj = decon)      # check the estimate integrates to ~1
This simple example actually yields an integral of 0.999...not too bad!
Good luck with the rest!
Julien Stirnemann
|
41,698
|
Beta distribution from mean and quantile
|
If you really have to do it with pesky Excel:
Create cells with quantile probability $p$, quantile value $q$, mean $m$.
Create a cell with some initial $\alpha$ value. Create a cell with formula $\beta=\left(\frac{1-m}{m}\right)\alpha$.
Create a cell with formula $\mathrm{abs}(q - \mathrm{beta.inv}(p, \alpha,\beta))$.
Go to "Data" > "What-If Analysis" > "Goal Seek". Choose the previous cell for item "Set cell", put $0$ in "To value", and choose the $\alpha$ cell for "By changing cell". Press "OK".
Next time: Use R! (I'm joking. I know you're an R user.)
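Outside Excel, the same Goal Seek step is a one-dimensional root-find. A sketch in Python with `scipy` follows; the inputs `m`, `p`, `q` are made-up illustration values, and the bracketing interval is an assumption about where the root lies.

```python
from scipy.optimize import brentq
from scipy.stats import beta as beta_dist

# Illustrative inputs (not from the original question):
# mean m, quantile probability p, quantile value q
m, p, q = 0.3, 0.95, 0.6

def quantile_gap(alpha):
    b = alpha * (1 - m) / m                 # enforces the mean: alpha/(alpha+b) = m
    return beta_dist.ppf(p, alpha, b) - q   # zero when the quantile matches

alpha = brentq(quantile_gap, 1e-2, 1e4)     # root-find, like Excel's Goal Seek
b = alpha * (1 - m) / m
print(alpha, b)
```

The mean constraint ties `b` to `alpha`, so only one free parameter remains and a bracketing solver suffices.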
|
41,699
|
Mean centering for PCA in a 2D array...across rows or cols?
|
Usually, each row is an "observation" (in your case image), and each column is a variable (in your case pixel value). Therefore, you should center and scale the columns before doing PCA.
Also, lots of good PCA libraries already exist, such as sklearn.decomposition.PCA, which can save you a lot of effort re-inventing the wheel. But if you persist in implementing PCA yourself, you should probably do so via the SVD rather than via the covariance matrix.
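A minimal sketch of the column-wise centering described above, assuming `numpy` and `scikit-learn` are available (the data is random and purely illustrative):

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative data: 100 "images" (rows) x 20 "pixels" (columns)
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 20))

# Center each COLUMN (variable), not each row; scaling is optional and
# matters mainly when variables are on different scales
Xc = X - X.mean(axis=0)

# sklearn's PCA centers the columns internally, so it agrees with an SVD of Xc
pca = PCA(n_components=3).fit(X)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
```

Up to sign flips, `pca.components_` matches the first rows of `Vt`, which is the SVD route recommended above.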
|
41,700
|
Mean centering for PCA in a 2D array...across rows or cols?
|
It depends on the way the data is set up. For instance, if you calculate the covariance matrix as $\Sigma=\frac{1}{n}X'X$ where $X$ is the de-meaned data, then you would want to remove the mean from each column, and vice versa.
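A quick numpy check of this convention, with illustrative random data (rows as observations, so the mean is removed per column):

```python
import numpy as np

# Illustrative data with rows as observations
rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))

Xc = X - X.mean(axis=0)          # remove the mean from each column
Sigma = Xc.T @ Xc / len(X)       # Sigma = (1/n) X'X as in the answer

# np.cov divides by (n - 1); rescale to compare the two conventions
Sigma_np = np.cov(X, rowvar=False) * (len(X) - 1) / len(X)
print(np.allclose(Sigma, Sigma_np))
```

The only difference from numpy's built-in is the $n$ versus $n-1$ denominator, which does not affect the PCA eigenvectors.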
|