idx
int64
1
56k
question
stringlengths
15
155
answer
stringlengths
2
29.2k
question_cut
stringlengths
15
100
answer_cut
stringlengths
2
200
conversation
stringlengths
47
29.3k
conversation_cut
stringlengths
47
301
23,901
Splines - basis functions - clarification
I know this is an old question but maybe my answer can be useful to someone in the future. It exists a connection between truncated polynomial functions bases (TPF) and B-splines. This is discussed, for example, in the following papers: Paul H.C. Eilers and Brian D. Marx - Splines, Knots, and Penalties, WIREs Computational Statistics (2010). Harald Binder, Willi Sauerbrei - Increasing the usefulness of additive spline models by knot removal, CSDA (2008) My answer will follow ref.1 (further details can be found there). In order to keep notation simple, I will use B-splines defined on equally spaced knots. I will indicate with $x$ a vector of abscissa points and with $t_{1} < t_{2} <...$ a series of equally spaced knots. Then, the TPF-bases of order $p$ are obtained by computing: $$ f^{p}_{ij}(x_{i}, t_{j}) = (x_{i} - t_{j})^{p} I (x_{i} > t_{j}) $$ TPFs of order 1 look like this How can we "transform" these broken lines in "triangles"? The operator that makes this possible is the difference operator: B-splines can be computed as (appropriately scaled) differences of adjacent TPFs. A B-spline basis of degree 1 (one of the triangles in the original question) can be obtained as: $$ B^{1}_{j}(x) = \frac{f_{j-2}^{1}(x) - 2 f^{1}_{j-1}(x) + f^{1}_{j}(x)}{h} $$ where $h = t_{2} - t_{1} = \mbox{const}$ is the distance between 2 consecutive knots (which is constant here because I am assuming equally spaced knots). By applying this operation to the 3 TPFs above, we obtain: The formula can be, of course, generalized to any degree $p$: $$ B^{p}_{j}(x) = (-1)^{p+1} \frac{\Delta^{p+1} f_{j}^{p}(x)}{h^{p} p!} $$ where $\Delta^{p+1}$ is the $p+1$th order difference operator. Once again, this holds only for equally spaced knots. If the distance between successive knots is not constat, more complex transformations are available. The code to reproduce my examples is rather simple (see below). Please note that, differently from the original quesiton, I will use the R function splineDesign() to create the B-splines. The bs() funciton is actually based on splineDesign() (see help page of bs()). # Libraries library(colorout) library(scales) # Get some data set.seed(1) n = 100 x = seq(0, 1, len = n) # B-splines stuffs xr = max(x) xl = min(x) ndx = 4 deg = 1 dx = (xr - xl) / ndx knots = seq(xl - deg * dx, xr + deg * dx, by = dx) B = splineDesign(knots = knots, x = x, ord = deg + 1, derivs = 0, outer.ok = TRUE) # Compute tpf basis and B-slines from them tpower = function(x, t, p) (x - t) ^ p * (x > t) P = outer(x, knots, tpower, deg) # Plot TPFs matplot(x, P[, 3:5], pch = 16, col = alpha(1:3, 0.5), ylab = "") matlines(x, P[, 3:5], pch = 16, lty = 1, col = alpha(1:3, 0.5)) # Compute B-splines as differences of TPFs Bp = t(apply(P, 1, diff, diff = 2)) / dx # Plot single B-spline and compare with splineDesign plot(x, Bp[, 3], pch = 16, ylab=" ") lines(x, B[, 3], pch = 16, lty = 1) # Compare all with splineDesign B-splines matplot(Bp,col = alpha(1:ncol(Bp), 0.5), pch = 16) matlines(B,col = alpha(1:ncol(Bp), 0.5), lty = 1) Hope this is somehow helpful and it replies (at least partially) the original question.
Splines - basis functions - clarification
I know this is an old question but maybe my answer can be useful to someone in the future. It exists a connection between truncated polynomial functions bases (TPF) and B-splines. This is discussed, f
Splines - basis functions - clarification I know this is an old question but maybe my answer can be useful to someone in the future. It exists a connection between truncated polynomial functions bases (TPF) and B-splines. This is discussed, for example, in the following papers: Paul H.C. Eilers and Brian D. Marx - Splines, Knots, and Penalties, WIREs Computational Statistics (2010). Harald Binder, Willi Sauerbrei - Increasing the usefulness of additive spline models by knot removal, CSDA (2008) My answer will follow ref.1 (further details can be found there). In order to keep notation simple, I will use B-splines defined on equally spaced knots. I will indicate with $x$ a vector of abscissa points and with $t_{1} < t_{2} <...$ a series of equally spaced knots. Then, the TPF-bases of order $p$ are obtained by computing: $$ f^{p}_{ij}(x_{i}, t_{j}) = (x_{i} - t_{j})^{p} I (x_{i} > t_{j}) $$ TPFs of order 1 look like this How can we "transform" these broken lines in "triangles"? The operator that makes this possible is the difference operator: B-splines can be computed as (appropriately scaled) differences of adjacent TPFs. A B-spline basis of degree 1 (one of the triangles in the original question) can be obtained as: $$ B^{1}_{j}(x) = \frac{f_{j-2}^{1}(x) - 2 f^{1}_{j-1}(x) + f^{1}_{j}(x)}{h} $$ where $h = t_{2} - t_{1} = \mbox{const}$ is the distance between 2 consecutive knots (which is constant here because I am assuming equally spaced knots). By applying this operation to the 3 TPFs above, we obtain: The formula can be, of course, generalized to any degree $p$: $$ B^{p}_{j}(x) = (-1)^{p+1} \frac{\Delta^{p+1} f_{j}^{p}(x)}{h^{p} p!} $$ where $\Delta^{p+1}$ is the $p+1$th order difference operator. Once again, this holds only for equally spaced knots. If the distance between successive knots is not constat, more complex transformations are available. The code to reproduce my examples is rather simple (see below). Please note that, differently from the original quesiton, I will use the R function splineDesign() to create the B-splines. The bs() funciton is actually based on splineDesign() (see help page of bs()). # Libraries library(colorout) library(scales) # Get some data set.seed(1) n = 100 x = seq(0, 1, len = n) # B-splines stuffs xr = max(x) xl = min(x) ndx = 4 deg = 1 dx = (xr - xl) / ndx knots = seq(xl - deg * dx, xr + deg * dx, by = dx) B = splineDesign(knots = knots, x = x, ord = deg + 1, derivs = 0, outer.ok = TRUE) # Compute tpf basis and B-slines from them tpower = function(x, t, p) (x - t) ^ p * (x > t) P = outer(x, knots, tpower, deg) # Plot TPFs matplot(x, P[, 3:5], pch = 16, col = alpha(1:3, 0.5), ylab = "") matlines(x, P[, 3:5], pch = 16, lty = 1, col = alpha(1:3, 0.5)) # Compute B-splines as differences of TPFs Bp = t(apply(P, 1, diff, diff = 2)) / dx # Plot single B-spline and compare with splineDesign plot(x, Bp[, 3], pch = 16, ylab=" ") lines(x, B[, 3], pch = 16, lty = 1) # Compare all with splineDesign B-splines matplot(Bp,col = alpha(1:ncol(Bp), 0.5), pch = 16) matlines(B,col = alpha(1:ncol(Bp), 0.5), lty = 1) Hope this is somehow helpful and it replies (at least partially) the original question.
Splines - basis functions - clarification I know this is an old question but maybe my answer can be useful to someone in the future. It exists a connection between truncated polynomial functions bases (TPF) and B-splines. This is discussed, f
23,902
QQ plot in Python
Macond's answer is accurate, however from the original post, I thought it might be helpful to simplify the verbiage a bit. A Q-Q plot stands for a "quantile-quantile plot". It is a plot where the axes are purposely transformed in order to make a normal (or Gaussian) distribution appear in a straight line. In other words, a perfectly normal distribution would exactly follow a line with slope = 1 and intercept = 0. Therefore, if the plot does not appear to be - roughly - a straight line, then the underlying distribution is not normal. If it bends up, then there are more "high flyer" values than expected, for instance. (The link provides more examples.) What do the x & y labels represent? The theoretical quantiles are placed along the x-axis. That is, the x-axis is not your data, it is simply an expectation of where your data should have been, if it were normal. The actual data is plotted along the y-axis. The values are the standard deviations from the mean. So, 0 is the mean of the data, 1 is 1 standard deviation above, etc. This means, for instance, that 68.27% of all your data should be between -1 & 1, if you have a normal distribution. What does the $R^2$ value mean? The $R^2$ value is not particularly useful for this sort of plot. $R^2$ is typically used to determine whether one variable is dependent upon another. Well, you are comparing a theoretical value to an actual value. So there will necessarily be some sort of $R^2$. (E.g., even a random uniform distribution will have a moderately decent $R^2$.) Lastly, there is a similar plot that is rarely used called the p-p plot. This plot is more useful if you are interested in focusing upon where the bulk of the data lies, instead of the extremes.
QQ plot in Python
Macond's answer is accurate, however from the original post, I thought it might be helpful to simplify the verbiage a bit. A Q-Q plot stands for a "quantile-quantile plot". It is a plot where the axes
QQ plot in Python Macond's answer is accurate, however from the original post, I thought it might be helpful to simplify the verbiage a bit. A Q-Q plot stands for a "quantile-quantile plot". It is a plot where the axes are purposely transformed in order to make a normal (or Gaussian) distribution appear in a straight line. In other words, a perfectly normal distribution would exactly follow a line with slope = 1 and intercept = 0. Therefore, if the plot does not appear to be - roughly - a straight line, then the underlying distribution is not normal. If it bends up, then there are more "high flyer" values than expected, for instance. (The link provides more examples.) What do the x & y labels represent? The theoretical quantiles are placed along the x-axis. That is, the x-axis is not your data, it is simply an expectation of where your data should have been, if it were normal. The actual data is plotted along the y-axis. The values are the standard deviations from the mean. So, 0 is the mean of the data, 1 is 1 standard deviation above, etc. This means, for instance, that 68.27% of all your data should be between -1 & 1, if you have a normal distribution. What does the $R^2$ value mean? The $R^2$ value is not particularly useful for this sort of plot. $R^2$ is typically used to determine whether one variable is dependent upon another. Well, you are comparing a theoretical value to an actual value. So there will necessarily be some sort of $R^2$. (E.g., even a random uniform distribution will have a moderately decent $R^2$.) Lastly, there is a similar plot that is rarely used called the p-p plot. This plot is more useful if you are interested in focusing upon where the bulk of the data lies, instead of the extremes.
QQ plot in Python Macond's answer is accurate, however from the original post, I thought it might be helpful to simplify the verbiage a bit. A Q-Q plot stands for a "quantile-quantile plot". It is a plot where the axes
23,903
QQ plot in Python
Y axis shows values of observed distribution and X axis, values of theoretical distribution. Each point is a quantile. Let's say, if there were 100 points on the plot, the first point (the one on lower-left side) indicates an upper bound for an interval, and when ordered from smallest to largest, the smallest 1 percent of the data points of the corresponding distribution stays in this interval. Similarly, 2nd point is upper bound of an interval, where smallest 2 percent of data points from the distribution is located. This is the concept of quantile. But it is not limited to a case with 100 intervals, it is a general concept and you can have as many intervals as possible, then you will have that many quantiles describing boundaries of the intervals. What is special about this plot is, each point's position determines actual value of the given quantile in both distributions, as corresponding value on the axis. Let's think as if there are 100 such points (quantiles) again, this plot tell that smallest 1 percent of data points from observed distribution is between ($ -\infty $, -3.5] and also smallest 1 percent of data points from theoretical distribution is between ($ -\infty $, -3.2]. This way you can see locations of each interval boundary's position in both distributions. I used data points throughout my answer, like ordered data points etc. This refers to discrete distributions, but the concept can be generalized for continous distributions. $R^2$ is a measure of how good the points fit to the red line. If both axes had the same distribution, all of the points would be exactly on the line and $R^2$ would equal 1. You can learn more about it in any text explaining linear regression.
QQ plot in Python
Y axis shows values of observed distribution and X axis, values of theoretical distribution. Each point is a quantile. Let's say, if there were 100 points on the plot, the first point (the one on low
QQ plot in Python Y axis shows values of observed distribution and X axis, values of theoretical distribution. Each point is a quantile. Let's say, if there were 100 points on the plot, the first point (the one on lower-left side) indicates an upper bound for an interval, and when ordered from smallest to largest, the smallest 1 percent of the data points of the corresponding distribution stays in this interval. Similarly, 2nd point is upper bound of an interval, where smallest 2 percent of data points from the distribution is located. This is the concept of quantile. But it is not limited to a case with 100 intervals, it is a general concept and you can have as many intervals as possible, then you will have that many quantiles describing boundaries of the intervals. What is special about this plot is, each point's position determines actual value of the given quantile in both distributions, as corresponding value on the axis. Let's think as if there are 100 such points (quantiles) again, this plot tell that smallest 1 percent of data points from observed distribution is between ($ -\infty $, -3.5] and also smallest 1 percent of data points from theoretical distribution is between ($ -\infty $, -3.2]. This way you can see locations of each interval boundary's position in both distributions. I used data points throughout my answer, like ordered data points etc. This refers to discrete distributions, but the concept can be generalized for continous distributions. $R^2$ is a measure of how good the points fit to the red line. If both axes had the same distribution, all of the points would be exactly on the line and $R^2$ would equal 1. You can learn more about it in any text explaining linear regression.
QQ plot in Python Y axis shows values of observed distribution and X axis, values of theoretical distribution. Each point is a quantile. Let's say, if there were 100 points on the plot, the first point (the one on low
23,904
Correlation between two ordinal categorical variables
I would go with Spearman rho and/or Kendall Tau for categorical (ordinal) variables. Related to the Pearson correlation coefficient, the Spearman correlation coefficient (rho) measures the relationship between two variables. Spearman's rho can be understood as a rank-based version of Pearson's correlation coefficient. Like Spearman's rho, Kendall's tau measures the degree of a monotone relationship between variables. Roughly speaking, Kendall's tau distinguishes itself from Spearman's rho by stronger penalization of non-sequential (in context of the ranked variables) dislocations.
Correlation between two ordinal categorical variables
I would go with Spearman rho and/or Kendall Tau for categorical (ordinal) variables. Related to the Pearson correlation coefficient, the Spearman correlation coefficient (rho) measures the relationsh
Correlation between two ordinal categorical variables I would go with Spearman rho and/or Kendall Tau for categorical (ordinal) variables. Related to the Pearson correlation coefficient, the Spearman correlation coefficient (rho) measures the relationship between two variables. Spearman's rho can be understood as a rank-based version of Pearson's correlation coefficient. Like Spearman's rho, Kendall's tau measures the degree of a monotone relationship between variables. Roughly speaking, Kendall's tau distinguishes itself from Spearman's rho by stronger penalization of non-sequential (in context of the ranked variables) dislocations.
Correlation between two ordinal categorical variables I would go with Spearman rho and/or Kendall Tau for categorical (ordinal) variables. Related to the Pearson correlation coefficient, the Spearman correlation coefficient (rho) measures the relationsh
23,905
Correlation between two ordinal categorical variables
Both of these have enough levels that you could just treat them as continuous variables, and use Pearson or Spearman correlation. You can then calculate a significance (p) value based on your correlation and sample size. If you really want to treat the data as categorical, you want to run a chi-squared test on the 10x10 matrix of overall satisfaction vs. availability satisfaction. You will need a decent amount of data for this (~thousands), since the majority of the cells should contain at least 5 observations for the test to be valid. This would allow for more general types of dependence between the two measures, in which even nearby levels show different relationships (e.g. rating1=9 tends to predict rating2=4, rating1=8 tends to predict rating2=10) which are probably not likely in your data.
Correlation between two ordinal categorical variables
Both of these have enough levels that you could just treat them as continuous variables, and use Pearson or Spearman correlation. You can then calculate a significance (p) value based on your correlat
Correlation between two ordinal categorical variables Both of these have enough levels that you could just treat them as continuous variables, and use Pearson or Spearman correlation. You can then calculate a significance (p) value based on your correlation and sample size. If you really want to treat the data as categorical, you want to run a chi-squared test on the 10x10 matrix of overall satisfaction vs. availability satisfaction. You will need a decent amount of data for this (~thousands), since the majority of the cells should contain at least 5 observations for the test to be valid. This would allow for more general types of dependence between the two measures, in which even nearby levels show different relationships (e.g. rating1=9 tends to predict rating2=4, rating1=8 tends to predict rating2=10) which are probably not likely in your data.
Correlation between two ordinal categorical variables Both of these have enough levels that you could just treat them as continuous variables, and use Pearson or Spearman correlation. You can then calculate a significance (p) value based on your correlat
23,906
Correlation between two ordinal categorical variables
I went and searched for it, found this from John Ubersax: http://www.john-uebersax.com/stat/tetra.htm and some papers https://link.springer.com/article/10.1007/s11135-008-9190-y https://escholarship.org/content/qt583610fv/qt583610fv.pdf
Correlation between two ordinal categorical variables
I went and searched for it, found this from John Ubersax: http://www.john-uebersax.com/stat/tetra.htm and some papers https://link.springer.com/article/10.1007/s11135-008-9190-y https://escholarship
Correlation between two ordinal categorical variables I went and searched for it, found this from John Ubersax: http://www.john-uebersax.com/stat/tetra.htm and some papers https://link.springer.com/article/10.1007/s11135-008-9190-y https://escholarship.org/content/qt583610fv/qt583610fv.pdf
Correlation between two ordinal categorical variables I went and searched for it, found this from John Ubersax: http://www.john-uebersax.com/stat/tetra.htm and some papers https://link.springer.com/article/10.1007/s11135-008-9190-y https://escholarship
23,907
How to interpret autocorrelation plot in MCMC
First of all: if memory and computational time for handling the MCMC output are not limiting, thinning is never "optimal". At an equal number of MCMC iterations, thinning of the chain always leads (on average) to a loss precision of the MCMC approximation. Routinely thinning based on autocorrelation or any other diagnostics is therefore not advisable. See Link, W. A. & Eaton, M. J. (2012) On thinning of chains in MCMC. Methods in Ecology and Evolution, 3, 112-115. In every-day practice, however, there is the common case that you have to work with a model for which the sampler doesn't mix very well (high autocorrelation). In this case 1) Close chain elements are very similar, meaning that throwing one away doesn't loose a lot of information (that is what the autocorrelation plot shows) 2) You need a lot of repetitions to get convergence, meaning that you get very large chains if you don't thin. Because of that, working with the full chain can be very slow, costs a lot of storage, or even lead to memory problems when monitoring a lot of variables. 3) Additionally, I have the feeling (but that one I have never tested systematically) that thinning makes JAGS a bit faster as well, so might be able to get a few more iterations in the same time. So, my point is: the autocorrelation plot gives you a rough estimate about how much information you are loosing through thinning (note though that this is an average over the whole posterior, loss may be higher in particular regions). Whether this price is worth paying depends on what you gain by thinning in terms saving computing resources and time later. If MCMC iterations are cheap, you can always compensate the loss of thinning by running a few more iterations as well.
How to interpret autocorrelation plot in MCMC
First of all: if memory and computational time for handling the MCMC output are not limiting, thinning is never "optimal". At an equal number of MCMC iterations, thinning of the chain always leads (on
How to interpret autocorrelation plot in MCMC First of all: if memory and computational time for handling the MCMC output are not limiting, thinning is never "optimal". At an equal number of MCMC iterations, thinning of the chain always leads (on average) to a loss precision of the MCMC approximation. Routinely thinning based on autocorrelation or any other diagnostics is therefore not advisable. See Link, W. A. & Eaton, M. J. (2012) On thinning of chains in MCMC. Methods in Ecology and Evolution, 3, 112-115. In every-day practice, however, there is the common case that you have to work with a model for which the sampler doesn't mix very well (high autocorrelation). In this case 1) Close chain elements are very similar, meaning that throwing one away doesn't loose a lot of information (that is what the autocorrelation plot shows) 2) You need a lot of repetitions to get convergence, meaning that you get very large chains if you don't thin. Because of that, working with the full chain can be very slow, costs a lot of storage, or even lead to memory problems when monitoring a lot of variables. 3) Additionally, I have the feeling (but that one I have never tested systematically) that thinning makes JAGS a bit faster as well, so might be able to get a few more iterations in the same time. So, my point is: the autocorrelation plot gives you a rough estimate about how much information you are loosing through thinning (note though that this is an average over the whole posterior, loss may be higher in particular regions). Whether this price is worth paying depends on what you gain by thinning in terms saving computing resources and time later. If MCMC iterations are cheap, you can always compensate the loss of thinning by running a few more iterations as well.
How to interpret autocorrelation plot in MCMC First of all: if memory and computational time for handling the MCMC output are not limiting, thinning is never "optimal". At an equal number of MCMC iterations, thinning of the chain always leads (on
23,908
How does Pearson's Chi Squared Statistic approximate a Chi Squared Distribution
I'm going to motivate this intuitively, and indicate how it comes about for the special case of two groups, assuming you're happy to accept the normal approximation to the binomial. Hopefully that will be enough for you to get a good sense of why it works the way it does. You're talking about the chi-square goodness of fit test. Let's say there are $k$ groups (you have it as $n$, but there's a reason I tend to prefer to call it $k$). In the model being applied for this situation, the counts $O_i$, $i=1,2,...,k$ are multinomial. Let $N=\sum_{i=1}^k O_i$. The counts are conditioned on the sum $N$ (except in some fairly rare situations); and there are some prespecified set of probabilities for each category, $p_i, i=1, 2, \ldots,k$, which sum to $1$. Just as with the binomial, there's an asymptotic normal approximation for multinomials -- indeed, if you consider only the count in a given cell ("in this category" or not), it would then be binomial. Just as with the binomial, the variances of the counts (as well as their covariances in the multinomial) are functions of $N$ and the $p$'s; you don't estimate a variance separately. That is, if the expected counts are sufficiently large, the vector of counts is approximately normal with mean $E_i=Np_i$. However, because the counts are conditioned on $N$, the distribution is degenerate (it exists in a hyperplane of dimension $k-1$, since specifying $k-1$ of the counts fixes the remaining one). The variance-covariance matrix has diagonal entries $Np_i(1-p_i)$ and off diagonal elements $-Np_ip_j$, and it is of rank $k-1$ because of the degeneracy. As a result, for an individual cell $\text{Var}(O_i)=Np_i(1-p_i)$, and you could write $z_i = \frac{O_i-E_i}{\sqrt{E_i(1-p_i)}}$. However, the terms are dependent (negatively correlated), so if you sum the squares of those $z_i$ it won't have the a $\chi^2_k$ distribution (as it would if they were independent standardized variables). Instead we could potentially construct a set of $k-1$ independent variables from the original $k$ which are independent and still approximately normal (asymptotically normal). If we summed their (standardized) squares, we'd get a $\chi^2_{k-1}$. There are ways to construct such a set of $k-1$ variables explicitly, but fortunately there's a very neat shortcut that avoids what amounts to a substantial amount of effort, and yields the same result (the same value of the statistic) as if we had gone to the trouble. Consider, for simplicity, a goodness of fit with two categories (which is now binomial). The probability of being in the first cell is $p_1=p$, and in the second cell is $p_2=1-p$. There are $X = O_1$ observations in the first cell, and $N-X=O_2$ in the second cell. The observed first cell count, $X$ is asymptotically $\text{N}(Np,Np(1-p))$. We can standardize it as $z=\frac{X-Np}{\sqrt{Np(1-p)}}$. Then $z^2 = \frac{(X-Np)^2}{Np(1-p)}$ is approximately $\sim \chi^2_1$ (asymptotically $\sim \chi^2_1$). Notice that $\sum_{i=1}^2 \frac{(O_i-E_i)^2}{E_i} = \frac{[X-Np]^2}{Np}+ \frac{[(N-X)-(N-Np)]^2}{N(1-p)}= \frac{[X-Np]^2}{Np}+ \frac{[X-Np]^2}{N(1-p)}=(X-Np)^2[\frac{1}{Np}+ \frac{1}{N(1-p)}]$. But $\frac{1}{Np}+ \frac{1}{N(1-p)} =\frac{Np+N(1-p)}{Np.N(1-p)} = \frac{1}{Np(1-p)}$. So $\sum_{i=1}^2 \frac{(O_i-E_i)^2}{E_i} =\frac{(X-Np)^2}{Np(1-p)}$ which is the $z^2$ we started with - which asymptotically will be a $\chi^2_1$ random variable. The dependence between the two cells is such that by diving by $E_i$ instead of $E_i(1-p_i)$ we exactly compensate for the dependence between the two, and get the original square-of-an-approximately-normal random variable. The same kind of sum-dependence is taken care of by the same approach when there are more than two categories -- by summing the $\frac{(O_i-E_i)^2}{E_i}$ instead of $\frac{(O_i-E_i)^2}{E_i(1-p_i)}$ over all $k$ terms, you exactly compensate for the effect of the dependence, and obtain a sum equivalent to a sum of $k-1$ independent normals. There are a variety of ways to show the statistic has a distribution that asymptotically $\chi^2_{k-1}$ for larger $k$ (it's covered in some undergraduate statistics courses, and can be found in a number of undergraduate-level texts), but I don't want to lead you too far beyond the level your question suggests. Indeed derivations are easy to find in notes on the internet, for example there are two different derivations in the space of about two pages here
How does Pearson's Chi Squared Statistic approximate a Chi Squared Distribution
I'm going to motivate this intuitively, and indicate how it comes about for the special case of two groups, assuming you're happy to accept the normal approximation to the binomial. Hopefully that wil
How does Pearson's Chi Squared Statistic approximate a Chi Squared Distribution I'm going to motivate this intuitively, and indicate how it comes about for the special case of two groups, assuming you're happy to accept the normal approximation to the binomial. Hopefully that will be enough for you to get a good sense of why it works the way it does. You're talking about the chi-square goodness of fit test. Let's say there are $k$ groups (you have it as $n$, but there's a reason I tend to prefer to call it $k$). In the model being applied for this situation, the counts $O_i$, $i=1,2,...,k$ are multinomial. Let $N=\sum_{i=1}^k O_i$. The counts are conditioned on the sum $N$ (except in some fairly rare situations); and there are some prespecified set of probabilities for each category, $p_i, i=1, 2, \ldots,k$, which sum to $1$. Just as with the binomial, there's an asymptotic normal approximation for multinomials -- indeed, if you consider only the count in a given cell ("in this category" or not), it would then be binomial. Just as with the binomial, the variances of the counts (as well as their covariances in the multinomial) are functions of $N$ and the $p$'s; you don't estimate a variance separately. That is, if the expected counts are sufficiently large, the vector of counts is approximately normal with mean $E_i=Np_i$. However, because the counts are conditioned on $N$, the distribution is degenerate (it exists in a hyperplane of dimension $k-1$, since specifying $k-1$ of the counts fixes the remaining one). The variance-covariance matrix has diagonal entries $Np_i(1-p_i)$ and off diagonal elements $-Np_ip_j$, and it is of rank $k-1$ because of the degeneracy. As a result, for an individual cell $\text{Var}(O_i)=Np_i(1-p_i)$, and you could write $z_i = \frac{O_i-E_i}{\sqrt{E_i(1-p_i)}}$. However, the terms are dependent (negatively correlated), so if you sum the squares of those $z_i$ it won't have the a $\chi^2_k$ distribution (as it would if they were independent standardized variables). Instead we could potentially construct a set of $k-1$ independent variables from the original $k$ which are independent and still approximately normal (asymptotically normal). If we summed their (standardized) squares, we'd get a $\chi^2_{k-1}$. There are ways to construct such a set of $k-1$ variables explicitly, but fortunately there's a very neat shortcut that avoids what amounts to a substantial amount of effort, and yields the same result (the same value of the statistic) as if we had gone to the trouble. Consider, for simplicity, a goodness of fit with two categories (which is now binomial). The probability of being in the first cell is $p_1=p$, and in the second cell is $p_2=1-p$. There are $X = O_1$ observations in the first cell, and $N-X=O_2$ in the second cell. The observed first cell count, $X$ is asymptotically $\text{N}(Np,Np(1-p))$. We can standardize it as $z=\frac{X-Np}{\sqrt{Np(1-p)}}$. Then $z^2 = \frac{(X-Np)^2}{Np(1-p)}$ is approximately $\sim \chi^2_1$ (asymptotically $\sim \chi^2_1$). Notice that $\sum_{i=1}^2 \frac{(O_i-E_i)^2}{E_i} = \frac{[X-Np]^2}{Np}+ \frac{[(N-X)-(N-Np)]^2}{N(1-p)}= \frac{[X-Np]^2}{Np}+ \frac{[X-Np]^2}{N(1-p)}=(X-Np)^2[\frac{1}{Np}+ \frac{1}{N(1-p)}]$. But $\frac{1}{Np}+ \frac{1}{N(1-p)} =\frac{Np+N(1-p)}{Np.N(1-p)} = \frac{1}{Np(1-p)}$. So $\sum_{i=1}^2 \frac{(O_i-E_i)^2}{E_i} =\frac{(X-Np)^2}{Np(1-p)}$ which is the $z^2$ we started with - which asymptotically will be a $\chi^2_1$ random variable. The dependence between the two cells is such that by diving by $E_i$ instead of $E_i(1-p_i)$ we exactly compensate for the dependence between the two, and get the original square-of-an-approximately-normal random variable. The same kind of sum-dependence is taken care of by the same approach when there are more than two categories -- by summing the $\frac{(O_i-E_i)^2}{E_i}$ instead of $\frac{(O_i-E_i)^2}{E_i(1-p_i)}$ over all $k$ terms, you exactly compensate for the effect of the dependence, and obtain a sum equivalent to a sum of $k-1$ independent normals. There are a variety of ways to show the statistic has a distribution that asymptotically $\chi^2_{k-1}$ for larger $k$ (it's covered in some undergraduate statistics courses, and can be found in a number of undergraduate-level texts), but I don't want to lead you too far beyond the level your question suggests. Indeed derivations are easy to find in notes on the internet, for example there are two different derivations in the space of about two pages here
How does Pearson's Chi Squared Statistic approximate a Chi Squared Distribution I'm going to motivate this intuitively, and indicate how it comes about for the special case of two groups, assuming you're happy to accept the normal approximation to the binomial. Hopefully that wil
23,909
How does Pearson's Chi Squared Statistic approximate a Chi Squared Distribution
The one-page manuscript http://sites.stat.psu.edu/~dhunter/asymp/lectures/p175to184.pdf referred to by user @Glen_b ultimately shows that the statistic can be rewritten as a Hotelling $T^2$ with covariance rank = $k-1$ (see eq. 9.6). We may then invoke a classical result of S.J. Sepanski (1994) to obtain its asymptotic distribution as a chi-squared with $k-1$ degrees of freedom.
How does Pearson's Chi Squared Statistic approximate a Chi Squared Distribution
The one-page manuscript http://sites.stat.psu.edu/~dhunter/asymp/lectures/p175to184.pdf referred to by user @Glen_b ultimately shows that the statistic can be rewritten as a Hotelling $T^2$ with covar
How does Pearson's Chi Squared Statistic approximate a Chi Squared Distribution The one-page manuscript http://sites.stat.psu.edu/~dhunter/asymp/lectures/p175to184.pdf referred to by user @Glen_b ultimately shows that the statistic can be rewritten as a Hotelling $T^2$ with covariance rank = $k-1$ (see eq. 9.6). We may then invoke a classical result of S.J. Sepanski (1994) to obtain its asymptotic distribution as a chi-squared with $k-1$ degrees of freedom.
How does Pearson's Chi Squared Statistic approximate a Chi Squared Distribution The one-page manuscript http://sites.stat.psu.edu/~dhunter/asymp/lectures/p175to184.pdf referred to by user @Glen_b ultimately shows that the statistic can be rewritten as a Hotelling $T^2$ with covar
23,910
testing logistic regression coefficients using $t$ and residual deviance degrees of freedom
Is there in fact statistical theory showing that z really does follow a t distribution in the case of logistic regression and/or other generalized linear models? As far as I am aware, no such theory exists. I do regularly see hand-wavy arguments, and occasionally simulation experiments to support such an approach for some particular GLM family or another. The simulations are more convincing than the handwavy arguments. If there is no such theory, are there at least papers out there showing that assuming a t distribution in this way works as well as, or maybe even better than, assuming a normal distribution? Not that I recall seeing, but that's not saying much. My own (limited) small-sample simulations suggest assuming a t-distribution in the logistic case may be substantially worse than assuming a normal: Here, for example, are the results (as Q-Q plots) of 10000 simulations of the Wald statistic for an ordinary logistic regression (i.e. fixed-effects, not mixed) on 15 equispaced x-observations where the population parameters were both zero. The red line is the y=x line. As you see, in each case the normal is quite quite a good approximation over a good range in the middle - out to about the 5th and 95th percentiles (1.6-1.7ish), and then outside that the actual distribution of the test statistic is substantially lighter tailed than the normal. So for the logistic case, I'd say any argument to use the t- rather than the z- seems to be unlikely to succeed on this basis, since simulations like these tend to suggest the results may tend to lie on the lighter-tailed side of the normal, rather than the heavier tailed. [However, I recommend you don't trust my simulations any further than as a warning to beware - try some of your own, perhaps for circumstances more representative of your own situations typical of your IVs and models (of course, you need to simulate the case where some null is true to see what distribution to use under the null). I'd be interested to hear how they come out for you.]
testing logistic regression coefficients using $t$ and residual deviance degrees of freedom
Is there in fact statistical theory showing that z really does follow a t distribution in the case of logistic regression and/or other generalized linear models? As far as I am aware, no such theory
testing logistic regression coefficients using $t$ and residual deviance degrees of freedom Is there in fact statistical theory showing that z really does follow a t distribution in the case of logistic regression and/or other generalized linear models? As far as I am aware, no such theory exists. I do regularly see hand-wavy arguments, and occasionally simulation experiments to support such an approach for some particular GLM family or another. The simulations are more convincing than the handwavy arguments. If there is no such theory, are there at least papers out there showing that assuming a t distribution in this way works as well as, or maybe even better than, assuming a normal distribution? Not that I recall seeing, but that's not saying much. My own (limited) small-sample simulations suggest assuming a t-distribution in the logistic case may be substantially worse than assuming a normal: Here, for example, are the results (as Q-Q plots) of 10000 simulations of the Wald statistic for an ordinary logistic regression (i.e. fixed-effects, not mixed) on 15 equispaced x-observations where the population parameters were both zero. The red line is the y=x line. As you see, in each case the normal is quite quite a good approximation over a good range in the middle - out to about the 5th and 95th percentiles (1.6-1.7ish), and then outside that the actual distribution of the test statistic is substantially lighter tailed than the normal. So for the logistic case, I'd say any argument to use the t- rather than the z- seems to be unlikely to succeed on this basis, since simulations like these tend to suggest the results may tend to lie on the lighter-tailed side of the normal, rather than the heavier tailed. [However, I recommend you don't trust my simulations any further than as a warning to beware - try some of your own, perhaps for circumstances more representative of your own situations typical of your IVs and models (of course, you need to simulate the case where some null is true to see what distribution to use under the null). I'd be interested to hear how they come out for you.]
testing logistic regression coefficients using $t$ and residual deviance degrees of freedom Is there in fact statistical theory showing that z really does follow a t distribution in the case of logistic regression and/or other generalized linear models? As far as I am aware, no such theory
23,911
testing logistic regression coefficients using $t$ and residual deviance degrees of freedom
Here are a few additional simulations just to expand a bit on what Glen_b already presented. In these simulations I looked at the slope of a logistic regression where the predictor had a uniform distribution in $[-1,1]$. The true regression slope was always 0. I varied the total sample size ($N=10,20,40,80$) and the base rate of the binary response ($p=0.5,0.731,0.881,0.952$). Here are Q-Q plots comparing the observed $z$ values (Wald statistics) to theoretical quantiles of the corresponding $t$ distribution ($df=N-2$). These are based on 1000 runs for each parameter combination. Notice that with small sample sizes and extreme base rates (i.e., the upper-right region of the figure), there were many cases the response only took on a single value, in which case $z=0$ and $p$-value $=1$. Here are histograms showing the distributions of $p$-values for the logistic regression slopes based on those same $t$ distributions. These are based on 10,000 runs for each parameter combination. The $p$-values are grouped into bins of width 0.05 (20 bins in total). The dashed horizontal line shows the 5% mark, that is, frequency=500. Of course, one wants the distribution of $p$-values under the null hypothesis to be uniform, that is, all the bars should be right around the dashed line. Notice again the many degenerate cases in the upper-right part of the figure. The conclusion seems to be that the use of $t$ distributions in this case can lead to severely conservative results when the sample size is small and/or when the base rate is approaching 0 or 1.
testing logistic regression coefficients using $t$ and residual deviance degrees of freedom
Here are a few additional simulations just to expand a bit on what Glen_b already presented. In these simulations I looked at the slope of a logistic regression where the predictor had a uniform distr
testing logistic regression coefficients using $t$ and residual deviance degrees of freedom Here are a few additional simulations just to expand a bit on what Glen_b already presented. In these simulations I looked at the slope of a logistic regression where the predictor had a uniform distribution in $[-1,1]$. The true regression slope was always 0. I varied the total sample size ($N=10,20,40,80$) and the base rate of the binary response ($p=0.5,0.731,0.881,0.952$). Here are Q-Q plots comparing the observed $z$ values (Wald statistics) to theoretical quantiles of the corresponding $t$ distribution ($df=N-2$). These are based on 1000 runs for each parameter combination. Notice that with small sample sizes and extreme base rates (i.e., the upper-right region of the figure), there were many cases the response only took on a single value, in which case $z=0$ and $p$-value $=1$. Here are histograms showing the distributions of $p$-values for the logistic regression slopes based on those same $t$ distributions. These are based on 10,000 runs for each parameter combination. The $p$-values are grouped into bins of width 0.05 (20 bins in total). The dashed horizontal line shows the 5% mark, that is, frequency=500. Of course, one wants the distribution of $p$-values under the null hypothesis to be uniform, that is, all the bars should be right around the dashed line. Notice again the many degenerate cases in the upper-right part of the figure. The conclusion seems to be that the use of $t$ distributions in this case can lead to severely conservative results when the sample size is small and/or when the base rate is approaching 0 or 1.
testing logistic regression coefficients using $t$ and residual deviance degrees of freedom Here are a few additional simulations just to expand a bit on what Glen_b already presented. In these simulations I looked at the slope of a logistic regression where the predictor had a uniform distr
23,912
testing logistic regression coefficients using $t$ and residual deviance degrees of freedom
Nice work both of you. Bill Gould studied this in http://www.citeulike.org/user/harrelfe/article/13264166 making the same conclusions, in a standard fixed-effects binary logistic model. Briefly, since the logistic model does not have an error term there is no residual variance to estimate hence the $t$ distribution does not apply [at least outside the context of multiple imputation adjustments].
testing logistic regression coefficients using $t$ and residual deviance degrees of freedom
Nice work both of you. Bill Gould studied this in http://www.citeulike.org/user/harrelfe/article/13264166 making the same conclusions, in a standard fixed-effects binary logistic model. Briefly, sinc
testing logistic regression coefficients using $t$ and residual deviance degrees of freedom Nice work both of you. Bill Gould studied this in http://www.citeulike.org/user/harrelfe/article/13264166 making the same conclusions, in a standard fixed-effects binary logistic model. Briefly, since the logistic model does not have an error term there is no residual variance to estimate hence the $t$ distribution does not apply [at least outside the context of multiple imputation adjustments].
testing logistic regression coefficients using $t$ and residual deviance degrees of freedom Nice work both of you. Bill Gould studied this in http://www.citeulike.org/user/harrelfe/article/13264166 making the same conclusions, in a standard fixed-effects binary logistic model. Briefly, sinc
23,913
Logistic regression and ordinal independent variables
As @Scortchi notes, you can also use orthogonal polynomials. Here is a quick demonstration in R: set.seed(3406) N = 50 real.x = runif(N, 0, 10) ord.x = cut(real.x, breaks=c(0,2,4,6,8,10), labels=FALSE) ord.x = factor(ord.x, levels=1:5, ordered=TRUE) lo.lin = -3 + .5*real.x p.lin = exp(lo.lin)/(1 + exp(lo.lin)) y.lin = rbinom(N, 1, prob=p.lin) mod.lin = glm(y.lin~ord.x, family=binomial) summary(mod.lin) # ... # Coefficients: # Estimate Std. Error z value Pr(>|z|) # (Intercept) 0.05754 0.36635 0.157 0.87520 # ord.x.L 2.94083 0.90304 3.257 0.00113 ** # ord.x.Q 0.94049 0.85724 1.097 0.27260 # ord.x.C -0.67049 0.77171 -0.869 0.38494 # ord.x^4 -0.09155 0.73376 -0.125 0.90071 # ...
Logistic regression and ordinal independent variables
As @Scortchi notes, you can also use orthogonal polynomials. Here is a quick demonstration in R: set.seed(3406) N = 50 real.x = runif(N, 0, 10) ord.x = cut(real.x, breaks=c(0,2,4,6,8,10), lab
Logistic regression and ordinal independent variables As @Scortchi notes, you can also use orthogonal polynomials. Here is a quick demonstration in R: set.seed(3406) N = 50 real.x = runif(N, 0, 10) ord.x = cut(real.x, breaks=c(0,2,4,6,8,10), labels=FALSE) ord.x = factor(ord.x, levels=1:5, ordered=TRUE) lo.lin = -3 + .5*real.x p.lin = exp(lo.lin)/(1 + exp(lo.lin)) y.lin = rbinom(N, 1, prob=p.lin) mod.lin = glm(y.lin~ord.x, family=binomial) summary(mod.lin) # ... # Coefficients: # Estimate Std. Error z value Pr(>|z|) # (Intercept) 0.05754 0.36635 0.157 0.87520 # ord.x.L 2.94083 0.90304 3.257 0.00113 ** # ord.x.Q 0.94049 0.85724 1.097 0.27260 # ord.x.C -0.67049 0.77171 -0.869 0.38494 # ord.x^4 -0.09155 0.73376 -0.125 0.90071 # ...
Logistic regression and ordinal independent variables As @Scortchi notes, you can also use orthogonal polynomials. Here is a quick demonstration in R: set.seed(3406) N = 50 real.x = runif(N, 0, 10) ord.x = cut(real.x, breaks=c(0,2,4,6,8,10), lab
23,914
Logistic regression and ordinal independent variables
Any good book on logistic regression will have this, although perhaps not in exactly those words. Try Agresti's Categorical Data Analysis for a very authoritative source. It also follows from the definition of logistic regression (or other regressions). There are few methods explicitly for ordinal independent variables. The usual options are treating it as categorical (which loses the order) or as continuous (which makes the assumption stated in what you quoted). If you treat it as continuous then the program doing the analysis doesn't know it's ordinal. E.g. suppose your IV is "How much do you like President Obama?" and your answer choices are a Likert scale from 1. "Very much" to 5. "Not at all". If you treat this as continuous then (from the program's point of view) a "5" answer is 5 times a "1" answer. This may or may not be unreasonable.
Logistic regression and ordinal independent variables
Any good book on logistic regression will have this, although perhaps not in exactly those words. Try Agresti's Categorical Data Analysis for a very authoritative source. It also follows from the def
Logistic regression and ordinal independent variables Any good book on logistic regression will have this, although perhaps not in exactly those words. Try Agresti's Categorical Data Analysis for a very authoritative source. It also follows from the definition of logistic regression (or other regressions). There are few methods explicitly for ordinal independent variables. The usual options are treating it as categorical (which loses the order) or as continuous (which makes the assumption stated in what you quoted). If you treat it as continuous then the program doing the analysis doesn't know it's ordinal. E.g. suppose your IV is "How much do you like President Obama?" and your answer choices are a Likert scale from 1. "Very much" to 5. "Not at all". If you treat this as continuous then (from the program's point of view) a "5" answer is 5 times a "1" answer. This may or may not be unreasonable.
Logistic regression and ordinal independent variables Any good book on logistic regression will have this, although perhaps not in exactly those words. Try Agresti's Categorical Data Analysis for a very authoritative source. It also follows from the def
23,915
Notation for multilevel modeling
I would write ~ attitude + gender + (1|subject) + (1|scenario) as $$ y_i \sim \beta_0 + \beta_1 \cdot I(\textrm{attitude}=\textrm{pol}) + \beta_2 I(\textrm{gender}=\textrm{male}) + b_{1,j[i]} + b_{2,k[i]} + \epsilon_i \\ b_1 \sim N(0,\sigma^2_1) \\ b_2 \sim N(0,\sigma^2_2) \\ \epsilon \sim N(0,\sigma^2_r) $$ where $\beta$ indicates a fixed-effect coefficient, $b$ indicates a random variable, $I$ is an indicator function (this is basically the same as what you said above, just slightly different notation). ~ attitude + gender + (1+attitude|subject) + (1+attitude|scenario) adds among-subject variation in response to attitude and scenario (we could equivalently write the random-effects part as (attitude|subject) + (attitude|scenario), i.e. leaving the intercept implicit; this is a matter of taste). Now $$ y_i \sim \beta_0 + \beta_1 \cdot I(\textrm{attitude}=\textrm{pol}) + \beta_2 I(\textrm{gender}=\textrm{male}) + \\ b_{1,j[i]} + b_{3,j[i]} I(\textrm{attitude}=\textrm{pol}) + b_{2,k[i]} + b_{4,k[i]} I(\textrm{attitude}=\textrm{pol}) + \epsilon_i \\ \{b_1,b_3\} \sim \textrm{MVN}({\mathbf 0},\Sigma_1) \\ \{b_2,b_4\} \sim \textrm{MVN}({\mathbf 0},\Sigma_2) \\ \epsilon \sim N(0,\sigma^2_r) $$ where $\Sigma_1$ and $\Sigma_2$ are unstructured variance-covariance matrices, i.e. they are symmetric and positive (semi)definite but have no other constraints: $$ \Sigma_1 = \left( \begin{array}{cc} \sigma^2_1 & \sigma_{13} \\ \sigma_{13} & \sigma^2_3 \end{array} \right) $$ and similarly for $\Sigma_2$. It might be instructive to group terms as follows: $$ y_i \sim (\beta_0 + b_{1,j[i]} + b_{2,k[i]}) + \\ ( \beta_1 + b_{3,j[i]} + b_{4,k[i]}) \cdot I(\textrm{attitude}=\textrm{pol}) + \beta_2 I(\textrm{gender}=\textrm{male}) + \epsilon_i $$ so you can see which random effects are affecting the intercept and which are affecting the response to attitude. Now if you leave out the fixed-effect attitude term (i.e. set $\beta_1=0$, or drop the attitude term from the formula) you can see (without rewriting everything) that, because the random effects are assumed to have zero mean, we will be assuming that the average response to attitude across subjects and scenarios will be exactly zero, while there is still variation among subjects and scenarios. I won't say this never makes sense from a statistical point of view, but it rarely does. There are discussions of this issue on the r-sig-mixed-models@r-project.org mailing list from time to time ... (or it may be discussed on StackExchange somewhere -- if not, it would make a good follow-up SE question ...)
Notation for multilevel modeling
I would write ~ attitude + gender + (1|subject) + (1|scenario) as $$ y_i \sim \beta_0 + \beta_1 \cdot I(\textrm{attitude}=\textrm{pol}) + \beta_2 I(\textrm{gender}=\textrm{male}) + b_{1,j[i]} + b_{2,
Notation for multilevel modeling I would write ~ attitude + gender + (1|subject) + (1|scenario) as $$ y_i \sim \beta_0 + \beta_1 \cdot I(\textrm{attitude}=\textrm{pol}) + \beta_2 I(\textrm{gender}=\textrm{male}) + b_{1,j[i]} + b_{2,k[i]} + \epsilon_i \\ b_1 \sim N(0,\sigma^2_1) \\ b_2 \sim N(0,\sigma^2_2) \\ \epsilon \sim N(0,\sigma^2_r) $$ where $\beta$ indicates a fixed-effect coefficient, $b$ indicates a random variable, $I$ is an indicator function (this is basically the same as what you said above, just slightly different notation). ~ attitude + gender + (1+attitude|subject) + (1+attitude|scenario) adds among-subject variation in response to attitude and scenario (we could equivalently write the random-effects part as (attitude|subject) + (attitude|scenario), i.e. leaving the intercept implicit; this is a matter of taste). Now $$ y_i \sim \beta_0 + \beta_1 \cdot I(\textrm{attitude}=\textrm{pol}) + \beta_2 I(\textrm{gender}=\textrm{male}) + \\ b_{1,j[i]} + b_{3,j[i]} I(\textrm{attitude}=\textrm{pol}) + b_{2,k[i]} + b_{4,k[i]} I(\textrm{attitude}=\textrm{pol}) + \epsilon_i \\ \{b_1,b_3\} \sim \textrm{MVN}({\mathbf 0},\Sigma_1) \\ \{b_2,b_4\} \sim \textrm{MVN}({\mathbf 0},\Sigma_2) \\ \epsilon \sim N(0,\sigma^2_r) $$ where $\Sigma_1$ and $\Sigma_2$ are unstructured variance-covariance matrices, i.e. they are symmetric and positive (semi)definite but have no other constraints: $$ \Sigma_1 = \left( \begin{array}{cc} \sigma^2_1 & \sigma_{13} \\ \sigma_{13} & \sigma^2_3 \end{array} \right) $$ and similarly for $\Sigma_2$. It might be instructive to group terms as follows: $$ y_i \sim (\beta_0 + b_{1,j[i]} + b_{2,k[i]}) + \\ ( \beta_1 + b_{3,j[i]} + b_{4,k[i]}) \cdot I(\textrm{attitude}=\textrm{pol}) + \beta_2 I(\textrm{gender}=\textrm{male}) + \epsilon_i $$ so you can see which random effects are affecting the intercept and which are affecting the response to attitude. Now if you leave out the fixed-effect attitude term (i.e. set $\beta_1=0$, or drop the attitude term from the formula) you can see (without rewriting everything) that, because the random effects are assumed to have zero mean, we will be assuming that the average response to attitude across subjects and scenarios will be exactly zero, while there is still variation among subjects and scenarios. I won't say this never makes sense from a statistical point of view, but it rarely does. There are discussions of this issue on the r-sig-mixed-models@r-project.org mailing list from time to time ... (or it may be discussed on StackExchange somewhere -- if not, it would make a good follow-up SE question ...)
Notation for multilevel modeling I would write ~ attitude + gender + (1|subject) + (1|scenario) as $$ y_i \sim \beta_0 + \beta_1 \cdot I(\textrm{attitude}=\textrm{pol}) + \beta_2 I(\textrm{gender}=\textrm{male}) + b_{1,j[i]} + b_{2,
23,916
How to calculate a mean and standard deviation for a lognormal distribution using 2 percentiles
It seems that you "know" or otherwise assume that you have two quantiles; say you have that 42 and 666 are the 10% and 90% points for a lognormal. The key is that almost everything is easier to do and understand on the logged (normal) scale; exponentiate as little and as late as possible. I take as examples quantiles that are symmetrically placed on the cumulative probability scale. Then the mean on the log scale is halfway between them and the standard deviation (sd) on the log scale can be estimated using the normal quantile function. I used Mata from Stata for these sample calculations. The backslash \ joins elements column-wise. mean = mean(ln((42 \ 666))) (ln(666) - mean) / invnormal(0.9) 1.078232092 SD = (ln(666) - mean) / invnormal(0.9) The mean on the exponentiated scale is then exp(mean + SD^2/2) 299.0981759 and the variance is left as an exercise. (Aside: It should be as easy or easier in any other decent software. invnormal() is just qnorm() in R if I recall correctly.)
How to calculate a mean and standard deviation for a lognormal distribution using 2 percentiles
It seems that you "know" or otherwise assume that you have two quantiles; say you have that 42 and 666 are the 10% and 90% points for a lognormal. The key is that almost everything is easier to do an
How to calculate a mean and standard deviation for a lognormal distribution using 2 percentiles It seems that you "know" or otherwise assume that you have two quantiles; say you have that 42 and 666 are the 10% and 90% points for a lognormal. The key is that almost everything is easier to do and understand on the logged (normal) scale; exponentiate as little and as late as possible. I take as examples quantiles that are symmetrically placed on the cumulative probability scale. Then the mean on the log scale is halfway between them and the standard deviation (sd) on the log scale can be estimated using the normal quantile function. I used Mata from Stata for these sample calculations. The backslash \ joins elements column-wise. mean = mean(ln((42 \ 666))) (ln(666) - mean) / invnormal(0.9) 1.078232092 SD = (ln(666) - mean) / invnormal(0.9) The mean on the exponentiated scale is then exp(mean + SD^2/2) 299.0981759 and the variance is left as an exercise. (Aside: It should be as easy or easier in any other decent software. invnormal() is just qnorm() in R if I recall correctly.)
How to calculate a mean and standard deviation for a lognormal distribution using 2 percentiles It seems that you "know" or otherwise assume that you have two quantiles; say you have that 42 and 666 are the 10% and 90% points for a lognormal. The key is that almost everything is easier to do an
23,917
How can I pool bootstrapped p-values across multiply imputed data sets?
I think both options result in the correct answer. In general, I would prefer method 1 as that preserves the entire distribution. For method 1, bootstrap the parameter $k$ times within each of the $m$ MI solutions. Then simply mix the $m$ bootstrapped distributions to obtain your final density, now consisting of $k \times m$ samples that include the between-imputation variation. Then treat that as a conventional bootstrap sample to get confidence intervals. Use the Bayesian bootstrap for small samples. I know of no simulation work that investigates this procedure, and this is actually an open problem to be investigated. For method 2, use the Licht-Rubin procedure. See How to get pooled p-values on tests done in multiple imputed datasets?
How can I pool bootstrapped p-values across multiply imputed data sets?
I think both options result in the correct answer. In general, I would prefer method 1 as that preserves the entire distribution. For method 1, bootstrap the parameter $k$ times within each of the $m
How can I pool bootstrapped p-values across multiply imputed data sets? I think both options result in the correct answer. In general, I would prefer method 1 as that preserves the entire distribution. For method 1, bootstrap the parameter $k$ times within each of the $m$ MI solutions. Then simply mix the $m$ bootstrapped distributions to obtain your final density, now consisting of $k \times m$ samples that include the between-imputation variation. Then treat that as a conventional bootstrap sample to get confidence intervals. Use the Bayesian bootstrap for small samples. I know of no simulation work that investigates this procedure, and this is actually an open problem to be investigated. For method 2, use the Licht-Rubin procedure. See How to get pooled p-values on tests done in multiple imputed datasets?
How can I pool bootstrapped p-values across multiply imputed data sets? I think both options result in the correct answer. In general, I would prefer method 1 as that preserves the entire distribution. For method 1, bootstrap the parameter $k$ times within each of the $m
23,918
How can I pool bootstrapped p-values across multiply imputed data sets?
This is not a literature I am familiar with, but one way to approach this might be to ignore the fact that these are bootstrapped p-values, and look at the literature on combining p-values across multiply imputed data sets. In that case, Li, Meng, Raghunathan, and Rubin (1991) applies. The procedure is based on statistics from each of the imputed datasets, weighted using a measure of the information loss due to imputation. They run into issues related to the joint distribution of the statistics across imputations, and they make some simplifying assumptions. Of related interest is Meng (1994). Update A procedure for combining p-values across multiply imputed datasets is described in the dissertation of Christine Licht, Ch. 4. The idea, which she attributes to Don Rubin, is essentially to transform the p-values to be normally distributed, which can then be combined across MI datasets using the standard rules for combination of z-statistics.
How can I pool bootstrapped p-values across multiply imputed data sets?
This is not a literature I am familiar with, but one way to approach this might be to ignore the fact that these are bootstrapped p-values, and look at the literature on combining p-values across mult
How can I pool bootstrapped p-values across multiply imputed data sets? This is not a literature I am familiar with, but one way to approach this might be to ignore the fact that these are bootstrapped p-values, and look at the literature on combining p-values across multiply imputed data sets. In that case, Li, Meng, Raghunathan, and Rubin (1991) applies. The procedure is based on statistics from each of the imputed datasets, weighted using a measure of the information loss due to imputation. They run into issues related to the joint distribution of the statistics across imputations, and they make some simplifying assumptions. Of related interest is Meng (1994). Update A procedure for combining p-values across multiply imputed datasets is described in the dissertation of Christine Licht, Ch. 4. The idea, which she attributes to Don Rubin, is essentially to transform the p-values to be normally distributed, which can then be combined across MI datasets using the standard rules for combination of z-statistics.
How can I pool bootstrapped p-values across multiply imputed data sets? This is not a literature I am familiar with, but one way to approach this might be to ignore the fact that these are bootstrapped p-values, and look at the literature on combining p-values across mult
23,919
Logistic Regression and Inflection Point
As touched upon by @scortchi the reviewer was operating under the false impression that it is not possible to model nonlinear effects of predictors on the logit scale in the context of logistic regression. The original model was quick to assume linearity of all predictors. By relaxing the linearity assumption, using for example restricted cubic splines (natural splines), the entire shape of the curve is flexible and inflection point is no longer an issue. Had there been a single predictor and had it been expanded using a regression spline, one could say that the logistic model makes only the assumptions of smoothness and independence of observations.
Logistic Regression and Inflection Point
As touched upon by @scortchi the reviewer was operating under the false impression that it is not possible to model nonlinear effects of predictors on the logit scale in the context of logistic regres
Logistic Regression and Inflection Point As touched upon by @scortchi the reviewer was operating under the false impression that it is not possible to model nonlinear effects of predictors on the logit scale in the context of logistic regression. The original model was quick to assume linearity of all predictors. By relaxing the linearity assumption, using for example restricted cubic splines (natural splines), the entire shape of the curve is flexible and inflection point is no longer an issue. Had there been a single predictor and had it been expanded using a regression spline, one could say that the logistic model makes only the assumptions of smoothness and independence of observations.
Logistic Regression and Inflection Point As touched upon by @scortchi the reviewer was operating under the false impression that it is not possible to model nonlinear effects of predictors on the logit scale in the context of logistic regres
23,920
Logistic Regression and Inflection Point
It seems to me that the reviewer was just looking for something to say. Before examining such features of the specification like the implied inflection point, there is a ton of assumptions that we have made, in order to arrive at an estimable model. All could be questioned and debated -the use of the logistic function itself being a possible primary target: who told us that the conditional distribution of the underlying error term is logistic? Nobody. So the issue is: what does the change of curvature signify? How important for the real-world phenomenon under study, may be the point at which this change of curvature happens, so that we would consider making it "data-driven"? Moving further away from the principle of parsimony? The question is not "why the inflection point should be at 0.5?" But "how misleading it may be for our conclusions if it is left at 0.5?".
Logistic Regression and Inflection Point
It seems to me that the reviewer was just looking for something to say. Before examining such features of the specification like the implied inflection point, there is a ton of assumptions that we hav
Logistic Regression and Inflection Point It seems to me that the reviewer was just looking for something to say. Before examining such features of the specification like the implied inflection point, there is a ton of assumptions that we have made, in order to arrive at an estimable model. All could be questioned and debated -the use of the logistic function itself being a possible primary target: who told us that the conditional distribution of the underlying error term is logistic? Nobody. So the issue is: what does the change of curvature signify? How important for the real-world phenomenon under study, may be the point at which this change of curvature happens, so that we would consider making it "data-driven"? Moving further away from the principle of parsimony? The question is not "why the inflection point should be at 0.5?" But "how misleading it may be for our conclusions if it is left at 0.5?".
Logistic Regression and Inflection Point It seems to me that the reviewer was just looking for something to say. Before examining such features of the specification like the implied inflection point, there is a ton of assumptions that we hav
23,921
Logistic Regression and Inflection Point
The 0.5 inflection point is a small part of a larger question: the logistic equation is by construction symmetric. And in most derivations of it, the modeled effect has a reason to be symmetric. e.g. as one player wins the other player loses, or the effect responsible for saturation is the same physical effect responsible for the initial growth, etc.... So if there is a reason why the origin of the low X behavior is the same origin as the righ hand behavious or for any other reason the problem is symmetric then you have your justification. if not, perhaps then the next simplest model is the generalized logistic equation. it has more parameters and you may want to add a constraint so they are not all free parameters. this is probably more desirable than the kludges you added because those are adding shelfs where the first derivative is oscilating back and forth-- that sort of thing tends to create fictional false points of local equilibrium if you are trying to optimize some expectation value of this distribution. the generalize form will break the symmetry but in a smooth way.
Logistic Regression and Inflection Point
The 0.5 inflection point is a small part of a larger question: the logistic equation is by construction symmetric. And in most derivations of it, the modeled effect has a reason to be symmetric. e.
Logistic Regression and Inflection Point The 0.5 inflection point is a small part of a larger question: the logistic equation is by construction symmetric. And in most derivations of it, the modeled effect has a reason to be symmetric. e.g. as one player wins the other player loses, or the effect responsible for saturation is the same physical effect responsible for the initial growth, etc.... So if there is a reason why the origin of the low X behavior is the same origin as the righ hand behavious or for any other reason the problem is symmetric then you have your justification. if not, perhaps then the next simplest model is the generalized logistic equation. it has more parameters and you may want to add a constraint so they are not all free parameters. this is probably more desirable than the kludges you added because those are adding shelfs where the first derivative is oscilating back and forth-- that sort of thing tends to create fictional false points of local equilibrium if you are trying to optimize some expectation value of this distribution. the generalize form will break the symmetry but in a smooth way.
Logistic Regression and Inflection Point The 0.5 inflection point is a small part of a larger question: the logistic equation is by construction symmetric. And in most derivations of it, the modeled effect has a reason to be symmetric. e.
23,922
Logistic Regression and Inflection Point
In m.h.o., the logit regression is a reasonable choice for dose- response. Of course, you can use probit , log-log, c-log-log link, and compare the goodness of fit (DEV, BIC, CAIC , etc.). But the simplest logit regression gives a comfortable formal assessment of the inflection point LD50 = -b0/b1. We remember that it is a specific point , for which we obtain the minimum uncertainty (cf. , LD16, LD84 , and any others will have a wider CI , see "Probit analysis” of Finney, 1947 , 1977). In my experience, always (?) It was better to use the logarithm of dose , and then just convert the 95% CI in the original scale . What is the nature of the other covariates in the model? I allude to the possibility to use multi-model approach... Certainly the Splines are flexible, but the formal parametrics is interpreted easier! See http://www.epa.gov/ncea/bmds/bmds_training/software/overp.htm
Logistic Regression and Inflection Point
In m.h.o., the logit regression is a reasonable choice for dose- response. Of course, you can use probit , log-log, c-log-log link, and compare the goodness of fit (DEV, BIC, CAIC , etc.). But the sim
Logistic Regression and Inflection Point In m.h.o., the logit regression is a reasonable choice for dose- response. Of course, you can use probit , log-log, c-log-log link, and compare the goodness of fit (DEV, BIC, CAIC , etc.). But the simplest logit regression gives a comfortable formal assessment of the inflection point LD50 = -b0/b1. We remember that it is a specific point , for which we obtain the minimum uncertainty (cf. , LD16, LD84 , and any others will have a wider CI , see "Probit analysis” of Finney, 1947 , 1977). In my experience, always (?) It was better to use the logarithm of dose , and then just convert the 95% CI in the original scale . What is the nature of the other covariates in the model? I allude to the possibility to use multi-model approach... Certainly the Splines are flexible, but the formal parametrics is interpreted easier! See http://www.epa.gov/ncea/bmds/bmds_training/software/overp.htm
Logistic Regression and Inflection Point In m.h.o., the logit regression is a reasonable choice for dose- response. Of course, you can use probit , log-log, c-log-log link, and compare the goodness of fit (DEV, BIC, CAIC , etc.). But the sim
23,923
Confused by Derivation of Regression Function
For your first confusion, it should be Expectation of squared error, so it is $E[(Y-f(x))^2].$ For the notation of $Pr(dx,dy)$, it is equal to $g(x,y)\,dx\,dy$, where $g(x,y)$ is the joint pdf of x and y. And $Pr(dx)=f(x)\,dx$, this can be interpreted as the probability of x being within a tiny interval of $[x,x+dx]$ is equal to pdf value at the point $x$, i.e. $f(x)$ times the interval length $dx$. The equation about the EPE stems from the theorem $E(E(Y|X))=E(Y)$ for any two random variables $X$ and $Y$. You can prove this by using the conditional distribution. The conditional expectation is the expectation calculated using the conditional distribution. The conditional distribution $Y|X$ means the probability of $Y$ after you know something about $X$. In our case, suppose we denote the squared error as a function $L(x,y)=(y-f(x))^2$, the EPE is calculating $$\begin{equation}\begin{split}E(L(x,y))&=\int\int L(x,y)g(x,y)dx\,dy \\ &=\int\bigg[\int L(x,y)g(y|x)g(x)dy\bigg]dx \\ &=\int\bigg[\int L(x,y)g(y|x)dy\bigg]g(x)dx \\ &=\int\bigg[E_{Y|X} (L(x,y)\bigg]g(x)dx \\ &=E_X(E_{Y|X} (L(x,y)))\end{split}\end{equation}$$ The outcome of above corresponds to the result you listed. Hope this can help you a bit.
Confused by Derivation of Regression Function
For your first confusion, it should be Expectation of squared error, so it is $E[(Y-f(x))^2].$ For the notation of $Pr(dx,dy)$, it is equal to $g(x,y)\,dx\,dy$, where $g(x,y)$ is the joint pdf of x an
Confused by Derivation of Regression Function For your first confusion, it should be Expectation of squared error, so it is $E[(Y-f(x))^2].$ For the notation of $Pr(dx,dy)$, it is equal to $g(x,y)\,dx\,dy$, where $g(x,y)$ is the joint pdf of x and y. And $Pr(dx)=f(x)\,dx$, this can be interpreted as the probability of x being within a tiny interval of $[x,x+dx]$ is equal to pdf value at the point $x$, i.e. $f(x)$ times the interval length $dx$. The equation about the EPE stems from the theorem $E(E(Y|X))=E(Y)$ for any two random variables $X$ and $Y$. You can prove this by using the conditional distribution. The conditional expectation is the expectation calculated using the conditional distribution. The conditional distribution $Y|X$ means the probability of $Y$ after you know something about $X$. In our case, suppose we denote the squared error as a function $L(x,y)=(y-f(x))^2$, the EPE is calculating $$\begin{equation}\begin{split}E(L(x,y))&=\int\int L(x,y)g(x,y)dx\,dy \\ &=\int\bigg[\int L(x,y)g(y|x)g(x)dy\bigg]dx \\ &=\int\bigg[\int L(x,y)g(y|x)dy\bigg]g(x)dx \\ &=\int\bigg[E_{Y|X} (L(x,y)\bigg]g(x)dx \\ &=E_X(E_{Y|X} (L(x,y)))\end{split}\end{equation}$$ The outcome of above corresponds to the result you listed. Hope this can help you a bit.
Confused by Derivation of Regression Function For your first confusion, it should be Expectation of squared error, so it is $E[(Y-f(x))^2].$ For the notation of $Pr(dx,dy)$, it is equal to $g(x,y)\,dx\,dy$, where $g(x,y)$ is the joint pdf of x an
23,924
When is a fixed effect truly fixed?
If you are interested in this formulation for causal inference about $\beta$ then the unknown quantities represented by $c_i$ need only be stable for the duration of the study / data for fixed effects to identify the relevant causal quantity. If you are concerned that the quantities represented by $c_i$ aren't stable even over this period then fixed effects won't do what you want. Then you can use random effects instead, although if you expect correlation between random $c_i$ and $X_i$ you'd want to condition $c_i$ on $\bar{X}_i$ in a multilevel setup. Concern about this correlation is often one of the motivations for a fixed effects formulation because under many (but not all) circumstances you don't need to worry about it then. In short, your concern about variation in the quantities represented by $c_i$ is very reasonable, but mostly as it affects the data for the period you have rather than periods you might have had or that you may eventually have but don't.
When is a fixed effect truly fixed?
If you are interested in this formulation for causal inference about $\beta$ then the unknown quantities represented by $c_i$ need only be stable for the duration of the study / data for fixed effects
When is a fixed effect truly fixed? If you are interested in this formulation for causal inference about $\beta$ then the unknown quantities represented by $c_i$ need only be stable for the duration of the study / data for fixed effects to identify the relevant causal quantity. If you are concerned that the quantities represented by $c_i$ aren't stable even over this period then fixed effects won't do what you want. Then you can use random effects instead, although if you expect correlation between random $c_i$ and $X_i$ you'd want to condition $c_i$ on $\bar{X}_i$ in a multilevel setup. Concern about this correlation is often one of the motivations for a fixed effects formulation because under many (but not all) circumstances you don't need to worry about it then. In short, your concern about variation in the quantities represented by $c_i$ is very reasonable, but mostly as it affects the data for the period you have rather than periods you might have had or that you may eventually have but don't.
When is a fixed effect truly fixed? If you are interested in this formulation for causal inference about $\beta$ then the unknown quantities represented by $c_i$ need only be stable for the duration of the study / data for fixed effects
23,925
When is a fixed effect truly fixed?
The distinction between a fixed effect and a random effect has typically no implications on the estimates (Edit: at least in the simple textbook uncorrelated cases), besides a matter of efficiency, but considerable implication for testing. For the purpose of testing, the question you should be asking yourself is what is the level of noise your signal should surpass? I.e., to what population do you want to generalize your findings? Using example (1): should it be the variability over the same day, a longer period, or the variability over different individuals? The more variance components you infer over, the stronger your scientific finding, with better chances of replicating. There is naturally a limit to the amount of generalization you can ask for, as not only the noise gets stronger, but also the signal ($E(c_i$)) gets weaker. To see this, imagine $E(c_i)$ is the expected effect of $X_i$ on weight but not over some life periods of a single subject, but rather over all mammals.
When is a fixed effect truly fixed?
The distinction between a fixed effect and a random effect has typically no implications on the estimates (Edit: at least in the simple textbook uncorrelated cases), besides a matter of efficiency, bu
When is a fixed effect truly fixed? The distinction between a fixed effect and a random effect has typically no implications on the estimates (Edit: at least in the simple textbook uncorrelated cases), besides a matter of efficiency, but considerable implication for testing. For the purpose of testing, the question you should be asking yourself is what is the level of noise your signal should surpass? I.e., to what population do you want to generalize your findings? Using example (1): should it be the variability over the same day, a longer period, or the variability over different individuals? The more variance components you infer over, the stronger your scientific finding, with better chances of replicating. There is naturally a limit to the amount of generalization you can ask for, as not only the noise gets stronger, but also the signal ($E(c_i$)) gets weaker. To see this, imagine $E(c_i)$ is the expected effect of $X_i$ on weight but not over some life periods of a single subject, but rather over all mammals.
When is a fixed effect truly fixed? The distinction between a fixed effect and a random effect has typically no implications on the estimates (Edit: at least in the simple textbook uncorrelated cases), besides a matter of efficiency, bu
23,926
When is a fixed effect truly fixed?
I've struggled with similar questions, see A Festschrift (blog post) for Lord, his paradox and Novick’s prediction, and here is my best attempt (hopefully with corrections if I am woefully wrong). If we drop the non-random shocks, $X_{it} \beta$, from the equation we then simply have: $$y_{it} = c_i + e_{it}$$ Which can be viewed as a random walk by going further back in time: \begin{align} y_{it} &= c_i + e_{it} \\ y_{it-1} &= c_i + e_{it-1} \\ y_{it} - y_{it-1} &= e_{it} - e_{it-1} \end{align} So this is just a reframing of conjugate prior's answer "need only be stable for the duration of the study" - but a reframing I find useful. So, during the duration of the study is it reasonable to consider that absent the treatments of interest, the $X_{it} \beta$ part, would the outcome be a random walk, only guided by random exogenous shocks - the $e_{it}$'s? Of course this is not true except for trivially pedantic circumstances. That is where my advice ends though. As gung mentions the George Box phrase, "all models are wrong, but some are useful". You would know better than I how to determine when this simplification is justified in a particular research design. It can be assumed we can't observe $c_i$ just the same as the random walk is not an accurate representation of reality - even for a tiny slice of time. I might guess for your particular example of the survey, questions measuring flow type data (e.g. income, weight) may be reasonable as random walks over particularly short time frames. Stock type data though (such as how many coffees did you drink today) it seems a bit more of a perverse presumption.
When is a fixed effect truly fixed?
I've struggled with similar questions, see A Festschrift (blog post) for Lord, his paradox and Novick’s prediction, and here is my best attempt (hopefully with corrections if I am woefully wrong). If
When is a fixed effect truly fixed? I've struggled with similar questions, see A Festschrift (blog post) for Lord, his paradox and Novick’s prediction, and here is my best attempt (hopefully with corrections if I am woefully wrong). If we drop the non-random shocks, $X_{it} \beta$, from the equation we then simply have: $$y_{it} = c_i + e_{it}$$ Which can be viewed as a random walk by going further back in time: \begin{align} y_{it} &= c_i + e_{it} \\ y_{it-1} &= c_i + e_{it-1} \\ y_{it} - y_{it-1} &= e_{it} - e_{it-1} \end{align} So this is just a reframing of conjugate prior's answer "need only be stable for the duration of the study" - but a reframing I find useful. So, during the duration of the study is it reasonable to consider that absent the treatments of interest, the $X_{it} \beta$ part, would the outcome be a random walk, only guided by random exogenous shocks - the $e_{it}$'s? Of course this is not true except for trivially pedantic circumstances. That is where my advice ends though. As gung mentions the George Box phrase, "all models are wrong, but some are useful". You would know better than I how to determine when this simplification is justified in a particular research design. It can be assumed we can't observe $c_i$ just the same as the random walk is not an accurate representation of reality - even for a tiny slice of time. I might guess for your particular example of the survey, questions measuring flow type data (e.g. income, weight) may be reasonable as random walks over particularly short time frames. Stock type data though (such as how many coffees did you drink today) it seems a bit more of a perverse presumption.
When is a fixed effect truly fixed? I've struggled with similar questions, see A Festschrift (blog post) for Lord, his paradox and Novick’s prediction, and here is my best attempt (hopefully with corrections if I am woefully wrong). If
23,927
How to visualize Bayesian goodness of fit for logistic regression
I have a feeling your not quite giving up all the goods to your situation, but given what we have in front of us lets consider the utility of a simple dot-plot to display the information. The only real thing to not here (that aren't perhaps default behaviors) are: I utilized redundant encodings, shape and color, to discriminate between the observed values of no defects and defects. With such simple information, placing a dot on the graph is not necessary. Also you have a problem when the point is near the middle values, it takes more look-up to see if the observed value is either zero or one. I sorted the graphic according to observed proportion. Sorting is the real kicker for dot-plots like these. Sorting by values of proportion here helps easily uncover high residual observations. Having a system where you can easily sort by values either contained in the plot or in external characteristics of the cases is the best way to get the bang for your buck. This advice extends to continuous observations as well. You could color/shape the points according to whether the residual is negative or positive, and then size the point according to the absolute (or squared) residual. This is IMO not necessary here though because of the simplicity of the observed values.
How to visualize Bayesian goodness of fit for logistic regression
I have a feeling your not quite giving up all the goods to your situation, but given what we have in front of us lets consider the utility of a simple dot-plot to display the information. The only re
How to visualize Bayesian goodness of fit for logistic regression I have a feeling your not quite giving up all the goods to your situation, but given what we have in front of us lets consider the utility of a simple dot-plot to display the information. The only real thing to not here (that aren't perhaps default behaviors) are: I utilized redundant encodings, shape and color, to discriminate between the observed values of no defects and defects. With such simple information, placing a dot on the graph is not necessary. Also you have a problem when the point is near the middle values, it takes more look-up to see if the observed value is either zero or one. I sorted the graphic according to observed proportion. Sorting is the real kicker for dot-plots like these. Sorting by values of proportion here helps easily uncover high residual observations. Having a system where you can easily sort by values either contained in the plot or in external characteristics of the cases is the best way to get the bang for your buck. This advice extends to continuous observations as well. You could color/shape the points according to whether the residual is negative or positive, and then size the point according to the absolute (or squared) residual. This is IMO not necessary here though because of the simplicity of the observed values.
How to visualize Bayesian goodness of fit for logistic regression I have a feeling your not quite giving up all the goods to your situation, but given what we have in front of us lets consider the utility of a simple dot-plot to display the information. The only re
23,928
How to visualize Bayesian goodness of fit for logistic regression
The usual way to visualising the fit of a Bayesian logistic regression model with one predictor is to plot the predictive distribution together with the corresponding proportions. (Please, let me know if I understood your question) An example using the popular Bliss' data set. Code Below in R: library(mcmc) # Beetle data ni = c(59, 60, 62, 56, 63, 59, 62, 60) # Number of individuals no = c(6, 13, 18, 28, 52, 53, 61, 60) # Observed successes dose = c(1.6907, 1.7242, 1.7552, 1.7842, 1.8113, 1.8369, 1.8610, 1.8839) # dose dat = cbind(dose,ni,no) ns = length(dat[,1]) # Log-posterior using a uniform prior on the parameters logpost = function(par){ var = dat[,3]*log(plogis(par[1]+par[2]*dat[,1])) + (dat[,2]-dat[,3])*log(1-plogis(par[1]+par[2]*dat[,1])) if( par[1]>-100000 ) return( sum(var) ) else return(-Inf) } # Metropolis-Hastings N = 60000 samp <- metrop(logpost, scale = .35, initial = c(-60,33), nbatch = N) samp$accept burnin = 10000 thinning = 50 ind = seq(burnin,N,thinning) mu1p = samp$batch[ , 1][ind] mu2p = samp$batch[ , 2][ind] # Visual tool points = no/ni # Predictive dose-response curve DRL <- function(d) return(mean(plogis(mu1p+mu2p*d))) DRLV = Vectorize(DRL) v <- seq(1.55,2,length.out=55) FL = DRLV(v) plot(v,FL,type="l",xlab="dose",ylab="response") points(dose,points,lwd=2)
How to visualize Bayesian goodness of fit for logistic regression
The usual way to visualising the fit of a Bayesian logistic regression model with one predictor is to plot the predictive distribution together with the corresponding proportions. (Please, let me know
How to visualize Bayesian goodness of fit for logistic regression The usual way to visualising the fit of a Bayesian logistic regression model with one predictor is to plot the predictive distribution together with the corresponding proportions. (Please, let me know if I understood your question) An example using the popular Bliss' data set. Code Below in R: library(mcmc) # Beetle data ni = c(59, 60, 62, 56, 63, 59, 62, 60) # Number of individuals no = c(6, 13, 18, 28, 52, 53, 61, 60) # Observed successes dose = c(1.6907, 1.7242, 1.7552, 1.7842, 1.8113, 1.8369, 1.8610, 1.8839) # dose dat = cbind(dose,ni,no) ns = length(dat[,1]) # Log-posterior using a uniform prior on the parameters logpost = function(par){ var = dat[,3]*log(plogis(par[1]+par[2]*dat[,1])) + (dat[,2]-dat[,3])*log(1-plogis(par[1]+par[2]*dat[,1])) if( par[1]>-100000 ) return( sum(var) ) else return(-Inf) } # Metropolis-Hastings N = 60000 samp <- metrop(logpost, scale = .35, initial = c(-60,33), nbatch = N) samp$accept burnin = 10000 thinning = 50 ind = seq(burnin,N,thinning) mu1p = samp$batch[ , 1][ind] mu2p = samp$batch[ , 2][ind] # Visual tool points = no/ni # Predictive dose-response curve DRL <- function(d) return(mean(plogis(mu1p+mu2p*d))) DRLV = Vectorize(DRL) v <- seq(1.55,2,length.out=55) FL = DRLV(v) plot(v,FL,type="l",xlab="dose",ylab="response") points(dose,points,lwd=2)
How to visualize Bayesian goodness of fit for logistic regression The usual way to visualising the fit of a Bayesian logistic regression model with one predictor is to plot the predictive distribution together with the corresponding proportions. (Please, let me know
23,929
How to visualize Bayesian goodness of fit for logistic regression
I am responding to a request for alternative graphical techniques that show how well simulated failure events match observed failure events. The question arose in "Probabilistic Programming and Bayesian Methods for Hackers " found here. Here's my graphical approach: Code found here.
How to visualize Bayesian goodness of fit for logistic regression
I am responding to a request for alternative graphical techniques that show how well simulated failure events match observed failure events. The question arose in "Probabilistic Programming and Bayes
How to visualize Bayesian goodness of fit for logistic regression I am responding to a request for alternative graphical techniques that show how well simulated failure events match observed failure events. The question arose in "Probabilistic Programming and Bayesian Methods for Hackers " found here. Here's my graphical approach: Code found here.
How to visualize Bayesian goodness of fit for logistic regression I am responding to a request for alternative graphical techniques that show how well simulated failure events match observed failure events. The question arose in "Probabilistic Programming and Bayes
23,930
How to visualize Bayesian goodness of fit for logistic regression
I arrive here after reading "Probabilistic Programming and Bayesian Methods for Hackers". Disclaimer not sure if I'm committing a crime here, but as we have a "prediction" model with a classification problem, one way to evaluate and visualize its goodness is to apply some standard ML classification eval plots. I usually use AUC and AUCPR plots, but there's also DET curve as well. we base these plots on the posterior_probability given by the sampling procedure. Another alternative (the one that I like the most) is the "precision-recall threshold curves" you can clearly see here how "good" your prediction probability is, given a different probability threshold. ( I think is the closest to the one that is in the book and publication) if you also define some probability threshold you can evaluate the confusion matrix among other metrics. At least for me, it is clearer when I have the AUC score, but again, maybe I'm getting too ahead of myself using this method for a bayesian model. code here
How to visualize Bayesian goodness of fit for logistic regression
I arrive here after reading "Probabilistic Programming and Bayesian Methods for Hackers". Disclaimer not sure if I'm committing a crime here, but as we have a "prediction" model with a classification
How to visualize Bayesian goodness of fit for logistic regression I arrive here after reading "Probabilistic Programming and Bayesian Methods for Hackers". Disclaimer not sure if I'm committing a crime here, but as we have a "prediction" model with a classification problem, one way to evaluate and visualize its goodness is to apply some standard ML classification eval plots. I usually use AUC and AUCPR plots, but there's also DET curve as well. we base these plots on the posterior_probability given by the sampling procedure. Another alternative (the one that I like the most) is the "precision-recall threshold curves" you can clearly see here how "good" your prediction probability is, given a different probability threshold. ( I think is the closest to the one that is in the book and publication) if you also define some probability threshold you can evaluate the confusion matrix among other metrics. At least for me, it is clearer when I have the AUC score, but again, maybe I'm getting too ahead of myself using this method for a bayesian model. code here
How to visualize Bayesian goodness of fit for logistic regression I arrive here after reading "Probabilistic Programming and Bayesian Methods for Hackers". Disclaimer not sure if I'm committing a crime here, but as we have a "prediction" model with a classification
23,931
Strategy for fitting highly non-linear function
The methods we would use to fit this manually (that is, of Exploratory Data Analysis) can work remarkably well with such data. I wish to reparameterize the model slightly in order to make its parameters positive: $$y = a x - b / \sqrt{x}.$$ For a given $y$, let's assume there is a unique real $x$ satisfying this equation; call this $f(y; a,b)$ or, for brevity, $f(y)$ when $(a,b)$ are understood. We observe a collection of ordered pairs $(x_i, y_i)$ where the $x_i$ deviate from $f(y_i; a,b)$ by independent random variates with zero means. In this discussion I will assume they all have a common variance, but an extension of these results (using weighted least squares) is possible, obvious, and easy to implement. Here is a simulated example of such a collection of $100$ values, with $a=0.0001$, $b=0.1$, and a common variance of $\sigma^2=4$. This is a (deliberately) tough example, as can be appreciated by the nonphysical (negative) $x$ values and their extraordinary spread (which is typically $\pm 2$ horizontal units, but can range up to $5$ or $6$ on the $x$ axis). If we can obtain a reasonable fit to these data that comes anywhere close to estimating the $a$, $b$, and $\sigma^2$ used, we will have done well indeed. An exploratory fitting is iterative. Each stage consists of two steps: estimate $a$ (based on the data and previous estimates $\hat{a}$ and $\hat{b}$ of $a$ and $b$, from which previous predicted values $\hat{x}_i$ can be obtained for the $x_i$) and then estimate $b$. Because the errors are in x, the fits estimate the $x_i$ from the $(y_i)$, rather than the other way around. To first order in the errors in $x$, when $x$ is sufficiently large, $$x_i \approx \frac{1}{a}\left(y_i + \frac{\hat{b}}{\sqrt{\hat{x}_i}}\right).$$ Therefore, we may update $\hat{a}$ by fitting this model with least squares (notice it has only one parameter--a slope, $a$--and no intercept) and taking the reciprocal of the coefficient as the updated estimate of $a$. Next, when $x$ is sufficiently small, the inverse quadratic term dominates and we find (again to first order in the errors) that $$x_i \approx b^2\frac{1 - 2 \hat{a} \hat{b} \hat{x}^{3/2}}{y_i^2}.$$ Once again using least squares (with just a slope term $b$) we obtain an updated estimate $\hat{b}$ via the square root of the fitted slope. To see why this works, a crude exploratory approximation to this fit can be obtained by plotting $x_i$ against $1/y_i^2$ for the smaller $x_i$. Better yet, because the $x_i$ are measured with error and the $y_i$ change monotonically with the $x_i$, we should focus on the data with the larger values of $1/y_i^2$. Here is an example from our simulated dataset showing the largest half of the $y_i$ in red, the smallest half in blue, and a line through the origin fit to the red points. The points approximately line up, although there is a bit of curvature at the small values of $x$ and $y$. (Notice the choice of axes: because $x$ is the measurement, it is conventional to plot it on the vertical axis.) By focusing the fit on the red points, where curvature should be minimal, we ought to obtain a reasonable estimate of $b$. The value of $0.096$ shown in the title is the square root of the slope of this line: it's only $4$% less than the true value! At this point the predicted values can be updated via $$\hat{x}_i = f(y_i; \hat{a}, \hat{b}).$$ Iterate until either the estimates stabilize (which is not guaranteed) or they cycle through small ranges of values (which still cannot be guaranteed). It turns out that $a$ is difficult to estimate unless we have a good set of very large values of $x$, but that $b$--which determines the vertical asymptote in the original plot (in the question) and is the focus of the question--can be pinned down quite accurately, provided there are some data within the vertical asymptote. In our running example, the iterations do converge to $\hat{a} = 0.000196$ (which is almost twice the correct value of $0.0001$) and $\hat{b} = 0.1073$ (which is close to the correct value of $0.1$). This plot shows the data once more, upon which are superimposed (a) the true curve in gray (dashed) and (b) the estimated curve in red (solid): This fit is so good that it is difficult to distinguish the true curve from the fitted curve: they overlap almost everywhere. Incidentally, the estimated error variance of $3.73$ is very close to the true value of $4$. There are some issues with this approach: The estimates are biased. The bias becomes apparent when the dataset is small and relatively few values are close to the x-axis. The fit is systematically a little low. The estimation procedure requires a method to tell "large" from "small" values of the $y_i$. I could propose exploratory ways to identify optimal definitions, but as a practical matter you can leave these as "tuning" constants and alter them to check the sensitivity of the results. I have set them arbitrarily by dividing the data into three equal groups according to the value of $y_i$ and using the two outer groups. The procedure will not work for all possible combinations of $a$ and $b$ or all possible ranges of data. However, it ought to work well whenever enough of the curve is represented in the dataset to reflect both asymptotes: the vertical one at one end and the slanted one at the other end. Code The following is written in Mathematica. estimate[{a_, b_, xHat_}, {x_, y_}] := Module[{n = Length[x], k0, k1, yLarge, xLarge, xHatLarge, ySmall, xSmall, xHatSmall, a1, b1, xHat1, u, fr}, fr[y_, {a_, b_}] := Root[-b^2 + y^2 #1 - 2 a y #1^2 + a^2 #1^3 &, 1]; k0 = Floor[1 n/3]; k1 = Ceiling[2 n/3];(* The tuning constants *) yLarge = y[[k1 + 1 ;;]]; xLarge = x[[k1 + 1 ;;]]; xHatLarge = xHat[[k1 + 1 ;;]]; ySmall = y[[;; k0]]; xSmall = x[[;; k0]]; xHatSmall = xHat[[;; k0]]; a1 = 1/ Last[LinearModelFit[{yLarge + b/Sqrt[xHatLarge], xLarge}\[Transpose], u, u]["BestFitParameters"]]; b1 = Sqrt[ Last[LinearModelFit[{(1 - 2 a1 b xHatSmall^(3/2)) / ySmall^2, xSmall}\[Transpose], u, u]["BestFitParameters"]]]; xHat1 = fr[#, {a1, b1}] & /@ y; {a1, b1, xHat1} ]; Apply this to data (given by parallel vectors x and y formed into a two-column matrix data = {x,y}) until convergence, starting with estimates of $a=b=0$: {a, b, xHat} = NestWhile[estimate[##, data] &, {0, 0, data[[1]]}, Norm[Most[#1] - Most[#2]] >= 0.001 &, 2, 100]
Strategy for fitting highly non-linear function
The methods we would use to fit this manually (that is, of Exploratory Data Analysis) can work remarkably well with such data. I wish to reparameterize the model slightly in order to make its paramete
Strategy for fitting highly non-linear function The methods we would use to fit this manually (that is, of Exploratory Data Analysis) can work remarkably well with such data. I wish to reparameterize the model slightly in order to make its parameters positive: $$y = a x - b / \sqrt{x}.$$ For a given $y$, let's assume there is a unique real $x$ satisfying this equation; call this $f(y; a,b)$ or, for brevity, $f(y)$ when $(a,b)$ are understood. We observe a collection of ordered pairs $(x_i, y_i)$ where the $x_i$ deviate from $f(y_i; a,b)$ by independent random variates with zero means. In this discussion I will assume they all have a common variance, but an extension of these results (using weighted least squares) is possible, obvious, and easy to implement. Here is a simulated example of such a collection of $100$ values, with $a=0.0001$, $b=0.1$, and a common variance of $\sigma^2=4$. This is a (deliberately) tough example, as can be appreciated by the nonphysical (negative) $x$ values and their extraordinary spread (which is typically $\pm 2$ horizontal units, but can range up to $5$ or $6$ on the $x$ axis). If we can obtain a reasonable fit to these data that comes anywhere close to estimating the $a$, $b$, and $\sigma^2$ used, we will have done well indeed. An exploratory fitting is iterative. Each stage consists of two steps: estimate $a$ (based on the data and previous estimates $\hat{a}$ and $\hat{b}$ of $a$ and $b$, from which previous predicted values $\hat{x}_i$ can be obtained for the $x_i$) and then estimate $b$. Because the errors are in x, the fits estimate the $x_i$ from the $(y_i)$, rather than the other way around. To first order in the errors in $x$, when $x$ is sufficiently large, $$x_i \approx \frac{1}{a}\left(y_i + \frac{\hat{b}}{\sqrt{\hat{x}_i}}\right).$$ Therefore, we may update $\hat{a}$ by fitting this model with least squares (notice it has only one parameter--a slope, $a$--and no intercept) and taking the reciprocal of the coefficient as the updated estimate of $a$. Next, when $x$ is sufficiently small, the inverse quadratic term dominates and we find (again to first order in the errors) that $$x_i \approx b^2\frac{1 - 2 \hat{a} \hat{b} \hat{x}^{3/2}}{y_i^2}.$$ Once again using least squares (with just a slope term $b$) we obtain an updated estimate $\hat{b}$ via the square root of the fitted slope. To see why this works, a crude exploratory approximation to this fit can be obtained by plotting $x_i$ against $1/y_i^2$ for the smaller $x_i$. Better yet, because the $x_i$ are measured with error and the $y_i$ change monotonically with the $x_i$, we should focus on the data with the larger values of $1/y_i^2$. Here is an example from our simulated dataset showing the largest half of the $y_i$ in red, the smallest half in blue, and a line through the origin fit to the red points. The points approximately line up, although there is a bit of curvature at the small values of $x$ and $y$. (Notice the choice of axes: because $x$ is the measurement, it is conventional to plot it on the vertical axis.) By focusing the fit on the red points, where curvature should be minimal, we ought to obtain a reasonable estimate of $b$. The value of $0.096$ shown in the title is the square root of the slope of this line: it's only $4$% less than the true value! At this point the predicted values can be updated via $$\hat{x}_i = f(y_i; \hat{a}, \hat{b}).$$ Iterate until either the estimates stabilize (which is not guaranteed) or they cycle through small ranges of values (which still cannot be guaranteed). It turns out that $a$ is difficult to estimate unless we have a good set of very large values of $x$, but that $b$--which determines the vertical asymptote in the original plot (in the question) and is the focus of the question--can be pinned down quite accurately, provided there are some data within the vertical asymptote. In our running example, the iterations do converge to $\hat{a} = 0.000196$ (which is almost twice the correct value of $0.0001$) and $\hat{b} = 0.1073$ (which is close to the correct value of $0.1$). This plot shows the data once more, upon which are superimposed (a) the true curve in gray (dashed) and (b) the estimated curve in red (solid): This fit is so good that it is difficult to distinguish the true curve from the fitted curve: they overlap almost everywhere. Incidentally, the estimated error variance of $3.73$ is very close to the true value of $4$. There are some issues with this approach: The estimates are biased. The bias becomes apparent when the dataset is small and relatively few values are close to the x-axis. The fit is systematically a little low. The estimation procedure requires a method to tell "large" from "small" values of the $y_i$. I could propose exploratory ways to identify optimal definitions, but as a practical matter you can leave these as "tuning" constants and alter them to check the sensitivity of the results. I have set them arbitrarily by dividing the data into three equal groups according to the value of $y_i$ and using the two outer groups. The procedure will not work for all possible combinations of $a$ and $b$ or all possible ranges of data. However, it ought to work well whenever enough of the curve is represented in the dataset to reflect both asymptotes: the vertical one at one end and the slanted one at the other end. Code The following is written in Mathematica. estimate[{a_, b_, xHat_}, {x_, y_}] := Module[{n = Length[x], k0, k1, yLarge, xLarge, xHatLarge, ySmall, xSmall, xHatSmall, a1, b1, xHat1, u, fr}, fr[y_, {a_, b_}] := Root[-b^2 + y^2 #1 - 2 a y #1^2 + a^2 #1^3 &, 1]; k0 = Floor[1 n/3]; k1 = Ceiling[2 n/3];(* The tuning constants *) yLarge = y[[k1 + 1 ;;]]; xLarge = x[[k1 + 1 ;;]]; xHatLarge = xHat[[k1 + 1 ;;]]; ySmall = y[[;; k0]]; xSmall = x[[;; k0]]; xHatSmall = xHat[[;; k0]]; a1 = 1/ Last[LinearModelFit[{yLarge + b/Sqrt[xHatLarge], xLarge}\[Transpose], u, u]["BestFitParameters"]]; b1 = Sqrt[ Last[LinearModelFit[{(1 - 2 a1 b xHatSmall^(3/2)) / ySmall^2, xSmall}\[Transpose], u, u]["BestFitParameters"]]]; xHat1 = fr[#, {a1, b1}] & /@ y; {a1, b1, xHat1} ]; Apply this to data (given by parallel vectors x and y formed into a two-column matrix data = {x,y}) until convergence, starting with estimates of $a=b=0$: {a, b, xHat} = NestWhile[estimate[##, data] &, {0, 0, data[[1]]}, Norm[Most[#1] - Most[#2]] >= 0.001 &, 2, 100]
Strategy for fitting highly non-linear function The methods we would use to fit this manually (that is, of Exploratory Data Analysis) can work remarkably well with such data. I wish to reparameterize the model slightly in order to make its paramete
23,932
Strategy for fitting highly non-linear function
See the important questions @probabilityislogic posted If you only have errors in y, and they're additive and you have constant variance (i.e. your assumptions fit what it sounds like you did), then if you let $y^* = y\sqrt{x}$, you could perhaps try a weighted linear fit of $y^*$ on $x^* = x^{3/2}$, where the weights will then be proportional to $1/x$ ... (and yes, this might simply be shifting the problem around, so it may still be problematic - but you should at least find it easier to regularize with this transformation of the problem). Note that with this manipulation, your $b$ becomes the intercept of the new equation If your variances are already not constant or your errors aren't additive or you have errors in the $x$, this will change things. -- Edit to consider the additional information: We got to a model of the form: $y^* = b + a x^*$ We now have that the errors are in x and additive. We still don't know if the variance is constant on that scale. Rewrite as $x^* = y^*/a - b/a = m y^* + c$ Let $x_o^* = x^* + \eta$, where this error term may be heteroskedastic (if the original $x$ has constant spread, it will be heteroskedastic, but of known form) (where the $o$ in $x_o^*$ stands for 'observed') Then $x^*_o = c + m y^* + \epsilon$ where $\epsilon = -\zeta$ looks nice but now has correlated errors in the $x$ and $y$ variables; so it's a linear errors-in-variables model, with heteroskedasticity and known form of dependence in the errors. I am not sure that improves things! I believe there are methods for that kind of thing, but it's not really my area at all. I mentioned in the comments that you might like to look at inverse regression, but the particular form of your function may preclude getting far with that. You might even be stuck with trying fairly robust-to-errors-in-x methods in that linear form. -- Now a huge question: if the errors are in x, how the heck were you fitting the nonlinear model? Were you just blindly minimizing the sum of squared errors in $y$? That might well be your problem. I suppose one could try to rewrite the original thing as a model with errors in the $x$ and try to optimize the fit but I am not sure I see how to set that up right.
Strategy for fitting highly non-linear function
See the important questions @probabilityislogic posted If you only have errors in y, and they're additive and you have constant variance (i.e. your assumptions fit what it sounds like you did), then i
Strategy for fitting highly non-linear function See the important questions @probabilityislogic posted If you only have errors in y, and they're additive and you have constant variance (i.e. your assumptions fit what it sounds like you did), then if you let $y^* = y\sqrt{x}$, you could perhaps try a weighted linear fit of $y^*$ on $x^* = x^{3/2}$, where the weights will then be proportional to $1/x$ ... (and yes, this might simply be shifting the problem around, so it may still be problematic - but you should at least find it easier to regularize with this transformation of the problem). Note that with this manipulation, your $b$ becomes the intercept of the new equation If your variances are already not constant or your errors aren't additive or you have errors in the $x$, this will change things. -- Edit to consider the additional information: We got to a model of the form: $y^* = b + a x^*$ We now have that the errors are in x and additive. We still don't know if the variance is constant on that scale. Rewrite as $x^* = y^*/a - b/a = m y^* + c$ Let $x_o^* = x^* + \eta$, where this error term may be heteroskedastic (if the original $x$ has constant spread, it will be heteroskedastic, but of known form) (where the $o$ in $x_o^*$ stands for 'observed') Then $x^*_o = c + m y^* + \epsilon$ where $\epsilon = -\zeta$ looks nice but now has correlated errors in the $x$ and $y$ variables; so it's a linear errors-in-variables model, with heteroskedasticity and known form of dependence in the errors. I am not sure that improves things! I believe there are methods for that kind of thing, but it's not really my area at all. I mentioned in the comments that you might like to look at inverse regression, but the particular form of your function may preclude getting far with that. You might even be stuck with trying fairly robust-to-errors-in-x methods in that linear form. -- Now a huge question: if the errors are in x, how the heck were you fitting the nonlinear model? Were you just blindly minimizing the sum of squared errors in $y$? That might well be your problem. I suppose one could try to rewrite the original thing as a model with errors in the $x$ and try to optimize the fit but I am not sure I see how to set that up right.
Strategy for fitting highly non-linear function See the important questions @probabilityislogic posted If you only have errors in y, and they're additive and you have constant variance (i.e. your assumptions fit what it sounds like you did), then i
23,933
Strategy for fitting highly non-linear function
After some more weeks of experimenting, a different technique seems to work the best in this particular case: Total Least Squares fitting. It's a variant of the usual (nonlinear) Least Squares fitting, but instead of measuring fit errors along just one of the axes (which causes problems in highly nonlinear cases such as this one), it takes both axes into account. There's a plethora of articles, tutorials and books avaiable on the subject, although the nonlinear case is more elusive. There's even some MATLAB code available.
Strategy for fitting highly non-linear function
After some more weeks of experimenting, a different technique seems to work the best in this particular case: Total Least Squares fitting. It's a variant of the usual (nonlinear) Least Squares fitting
Strategy for fitting highly non-linear function After some more weeks of experimenting, a different technique seems to work the best in this particular case: Total Least Squares fitting. It's a variant of the usual (nonlinear) Least Squares fitting, but instead of measuring fit errors along just one of the axes (which causes problems in highly nonlinear cases such as this one), it takes both axes into account. There's a plethora of articles, tutorials and books avaiable on the subject, although the nonlinear case is more elusive. There's even some MATLAB code available.
Strategy for fitting highly non-linear function After some more weeks of experimenting, a different technique seems to work the best in this particular case: Total Least Squares fitting. It's a variant of the usual (nonlinear) Least Squares fitting
23,934
Can I detrend and difference to make a series stationary?
If your process is given by $$y_t = \alpha + \beta t + \gamma x_{t} + \epsilon_t $$ then differencing it takes out the constant and the trend so that you're left with $$\Delta y_t = \gamma\Delta x_t + u_t $$ Therefore differencing the series takes out the trend by itself, there's no need to detrend the process beforehand. EDIT: As noted by @djom and @Placidia in the comments, if the trend is not linear things could get more complicated. To get back to the example above, we would have more precisely $$ \Delta y_t = \beta + \gamma \Delta x_t + \epsilon_t - \epsilon_{t-1} $$ so that the trend is transformed actually to a constant. However if your deterministic trend is some function $f(t)$, then it will depend on behaviour of $f(t) - f(t-1)$. For a polynomial trend with degree $p$, you'll need to difference $p$ times to get rid of it while for exponential trend differencing won't theoretically help at all. If you observe that differencing twice eliminates the trend, you may be simply facing a quadratic trend, i.e. $\beta_1 t^2 + \beta_2 t$.
Can I detrend and difference to make a series stationary?
If your process is given by $$y_t = \alpha + \beta t + \gamma x_{t} + \epsilon_t $$ then differencing it takes out the constant and the trend so that you're left with $$\Delta y_t = \gamma\Delta x_t +
Can I detrend and difference to make a series stationary? If your process is given by $$y_t = \alpha + \beta t + \gamma x_{t} + \epsilon_t $$ then differencing it takes out the constant and the trend so that you're left with $$\Delta y_t = \gamma\Delta x_t + u_t $$ Therefore differencing the series takes out the trend by itself, there's no need to detrend the process beforehand. EDIT: As noted by @djom and @Placidia in the comments, if the trend is not linear things could get more complicated. To get back to the example above, we would have more precisely $$ \Delta y_t = \beta + \gamma \Delta x_t + \epsilon_t - \epsilon_{t-1} $$ so that the trend is transformed actually to a constant. However if your deterministic trend is some function $f(t)$, then it will depend on behaviour of $f(t) - f(t-1)$. For a polynomial trend with degree $p$, you'll need to difference $p$ times to get rid of it while for exponential trend differencing won't theoretically help at all. If you observe that differencing twice eliminates the trend, you may be simply facing a quadratic trend, i.e. $\beta_1 t^2 + \beta_2 t$.
Can I detrend and difference to make a series stationary? If your process is given by $$y_t = \alpha + \beta t + \gamma x_{t} + \epsilon_t $$ then differencing it takes out the constant and the trend so that you're left with $$\Delta y_t = \gamma\Delta x_t +
23,935
Can I detrend and difference to make a series stationary?
I assume you're referring to nonlinear trend; detrending and differencing in whatever order won't necessarily make a series stationary; it depends on whether the form of nonstationarity is such that it is all captured by integration and trend.
Can I detrend and difference to make a series stationary?
I assume you're referring to nonlinear trend; detrending and differencing in whatever order won't necessarily make a series stationary; it depends on whether the form of nonstationarity is such that i
Can I detrend and difference to make a series stationary? I assume you're referring to nonlinear trend; detrending and differencing in whatever order won't necessarily make a series stationary; it depends on whether the form of nonstationarity is such that it is all captured by integration and trend.
Can I detrend and difference to make a series stationary? I assume you're referring to nonlinear trend; detrending and differencing in whatever order won't necessarily make a series stationary; it depends on whether the form of nonstationarity is such that i
23,936
What are $\ell_p$ norms and how are they relevant to regularization?
$\ell_p$ norms are functions that take vectors and return nonnegative numbers. They're defined as $$\|\vec x\|_p = \left(\sum_{i=1}^d |x_i|^p\right)^{1/p}$$ In the case where $p=2$, this is called the Euclidean norm. You can define the Euclidean distance as $\|\vec x - \vec y\|_2$. When $p = \infty$, this just means $\|\vec x\|_\infty = \sup_i x_i$ (or $\max_i x_i$). Strictly speaking, $p$ must be at least one for $\|\vec x\|_p$ to be a norm. If $0 < p < 1$, then $\|\vec x\|_p$ isn't really a norm, because norms must satisfy the triangle inequality. (There are also $L_p$ norms, which are defined analagously, except for functions instead of vectors or sequences -- really this is the same thing, since vectors are functions with finite domains.) I'm not aware of any use for a norm in a machine learning application where $p > 2$, except where $p = \infty$. Usually you see $p = 2$ or $p = 1$, or sometimes $1 < p < 2$ where you want to relax the $p = 1$ case; $\|\vec x\|_1$ isn't strictly convex in $\vec x$, but $\|\vec x\|_p$ is, for $1 < p < \infty$. This can make finding the solution "easier" in certain cases. In the context of regularization, if you add $\|\vec x\|_1$ to your objective function, what you're saying is that you expect $\vec x$ to be sparse, that is, mostly made up of zeros. It's a bit technical, but basically, if there is a dense solution, there's likely a sparser solution with the same norm. If you expect your solution to be dense, you can add $\|\vec x\|_2^2$ to your objective, because then it's much easier to work with its derivative. Both serve the purpose of keeping the solution from having too much weight. The mixed norm comes in when you're trying to integrate several sources. Basically you want the solution vector to be made up of several pieces $\vec{x}^j$, where $j$ is the index of some source. The $\ell_{p,q}$ norm is just the $q$-norm of all the $p$-norms collected in a vector. I.e., $$\|\vec x\|_{p,q} = \left( \sum_{j = 1}^m \left( \sum_{i=1}^d |x_i^j|^p\right)^{q/p}\right)^{1/q}$$ The purpose of this is not to "oversparsify" a set of solutions, say by using $\|\vec x\|_{1,2}$. The individual pieces are sparse, but you don't risk nuking a whole solution vector by taking the $1$-norm of all of the solutions. So you use the $2$-norm on the outside instead. Hope that helps. See this paper for more details.
What are $\ell_p$ norms and how are they relevant to regularization?
$\ell_p$ norms are functions that take vectors and return nonnegative numbers. They're defined as $$\|\vec x\|_p = \left(\sum_{i=1}^d |x_i|^p\right)^{1/p}$$ In the case where $p=2$, this is called the
What are $\ell_p$ norms and how are they relevant to regularization? $\ell_p$ norms are functions that take vectors and return nonnegative numbers. They're defined as $$\|\vec x\|_p = \left(\sum_{i=1}^d |x_i|^p\right)^{1/p}$$ In the case where $p=2$, this is called the Euclidean norm. You can define the Euclidean distance as $\|\vec x - \vec y\|_2$. When $p = \infty$, this just means $\|\vec x\|_\infty = \sup_i x_i$ (or $\max_i x_i$). Strictly speaking, $p$ must be at least one for $\|\vec x\|_p$ to be a norm. If $0 < p < 1$, then $\|\vec x\|_p$ isn't really a norm, because norms must satisfy the triangle inequality. (There are also $L_p$ norms, which are defined analagously, except for functions instead of vectors or sequences -- really this is the same thing, since vectors are functions with finite domains.) I'm not aware of any use for a norm in a machine learning application where $p > 2$, except where $p = \infty$. Usually you see $p = 2$ or $p = 1$, or sometimes $1 < p < 2$ where you want to relax the $p = 1$ case; $\|\vec x\|_1$ isn't strictly convex in $\vec x$, but $\|\vec x\|_p$ is, for $1 < p < \infty$. This can make finding the solution "easier" in certain cases. In the context of regularization, if you add $\|\vec x\|_1$ to your objective function, what you're saying is that you expect $\vec x$ to be sparse, that is, mostly made up of zeros. It's a bit technical, but basically, if there is a dense solution, there's likely a sparser solution with the same norm. If you expect your solution to be dense, you can add $\|\vec x\|_2^2$ to your objective, because then it's much easier to work with its derivative. Both serve the purpose of keeping the solution from having too much weight. The mixed norm comes in when you're trying to integrate several sources. Basically you want the solution vector to be made up of several pieces $\vec{x}^j$, where $j$ is the index of some source. The $\ell_{p,q}$ norm is just the $q$-norm of all the $p$-norms collected in a vector. I.e., $$\|\vec x\|_{p,q} = \left( \sum_{j = 1}^m \left( \sum_{i=1}^d |x_i^j|^p\right)^{q/p}\right)^{1/q}$$ The purpose of this is not to "oversparsify" a set of solutions, say by using $\|\vec x\|_{1,2}$. The individual pieces are sparse, but you don't risk nuking a whole solution vector by taking the $1$-norm of all of the solutions. So you use the $2$-norm on the outside instead. Hope that helps. See this paper for more details.
What are $\ell_p$ norms and how are they relevant to regularization? $\ell_p$ norms are functions that take vectors and return nonnegative numbers. They're defined as $$\|\vec x\|_p = \left(\sum_{i=1}^d |x_i|^p\right)^{1/p}$$ In the case where $p=2$, this is called the
23,937
Expected value of spurious correlation
I found the following article, which addresses this problem: Jiang, Tiefeng (2004). The Asymptotic Distributions of the Largest Entries of Sample Correlation Matrices. The Annals of Applied Probability, 14(2), 865-880 Jiang shows the asymptotic distribution of the statistic $L_n = \max_{1\leq i<j\leq N} |\rho_{ij}|$, where $\rho_{ij}$ is the correlation between the $i$th and $j$th random vectors of length $n$ (with $i\neq j$), is $$ \lim_{n \to \infty} \Pr[ nL_n^2 - 4\log n + \log(\log(n)) \leq y] = \exp\left(-\frac{1}{a^2\sqrt{8\pi}}\exp(-y/2)\right) \,, $$ where $a = \lim_{n\to\infty} n/N$ is assumed to exist in the paper and $N$ is a function of $n$. Apparently this result holds for any distribution distributions with a sufficient number of finite moments (Edit: See @cardinal's comment below). Jiang points out that this is a Type I extreme value distribution. The location and scale are $$ \sigma=2,\quad\mu = 2\log\left( \frac{1}{a^2\sqrt{8\pi}} \right). $$ The expected value of the Type-I EV distribution is $\mu + \sigma \gamma$, where $\gamma$ denotes Euler's constant. However, as noted in the comments, convergence in distribution does not, in and of itself, guarantee convergence of the means to that of the limiting distribution. If we could show such a result in this case, then the asymptotic expected value of $n L_n^2 -4\log n + \log(\log(n))$ would be $$ \lim_{n\to\infty} \mathbb E\left[ nL_n^2 - 4\log n + \log(\log(n)) \right] = -2\log\left(a^2\sqrt{8\pi} \right) + 2\gamma \,. $$ Note that this would give the asymptotic expected value of the largest squared correlation, whereas the question asked for the expected value of the largest absolute correlation. So not 100% there, but close. I did a few brief simulations that lead me to think either 1) there's a problem with my simulation (likely), 2) there's a problem with my transcription / algebra (also likely), or 3) the approximation isn't valid for the values of $n$ and $N$ I used. Perhaps the OP can weigh in with some simulation results using this approximation?
Expected value of spurious correlation
I found the following article, which addresses this problem: Jiang, Tiefeng (2004). The Asymptotic Distributions of the Largest Entries of Sample Correlation Matrices. The Annals of Applied Probabilit
Expected value of spurious correlation I found the following article, which addresses this problem: Jiang, Tiefeng (2004). The Asymptotic Distributions of the Largest Entries of Sample Correlation Matrices. The Annals of Applied Probability, 14(2), 865-880 Jiang shows the asymptotic distribution of the statistic $L_n = \max_{1\leq i<j\leq N} |\rho_{ij}|$, where $\rho_{ij}$ is the correlation between the $i$th and $j$th random vectors of length $n$ (with $i\neq j$), is $$ \lim_{n \to \infty} \Pr[ nL_n^2 - 4\log n + \log(\log(n)) \leq y] = \exp\left(-\frac{1}{a^2\sqrt{8\pi}}\exp(-y/2)\right) \,, $$ where $a = \lim_{n\to\infty} n/N$ is assumed to exist in the paper and $N$ is a function of $n$. Apparently this result holds for any distribution distributions with a sufficient number of finite moments (Edit: See @cardinal's comment below). Jiang points out that this is a Type I extreme value distribution. The location and scale are $$ \sigma=2,\quad\mu = 2\log\left( \frac{1}{a^2\sqrt{8\pi}} \right). $$ The expected value of the Type-I EV distribution is $\mu + \sigma \gamma$, where $\gamma$ denotes Euler's constant. However, as noted in the comments, convergence in distribution does not, in and of itself, guarantee convergence of the means to that of the limiting distribution. If we could show such a result in this case, then the asymptotic expected value of $n L_n^2 -4\log n + \log(\log(n))$ would be $$ \lim_{n\to\infty} \mathbb E\left[ nL_n^2 - 4\log n + \log(\log(n)) \right] = -2\log\left(a^2\sqrt{8\pi} \right) + 2\gamma \,. $$ Note that this would give the asymptotic expected value of the largest squared correlation, whereas the question asked for the expected value of the largest absolute correlation. So not 100% there, but close. I did a few brief simulations that lead me to think either 1) there's a problem with my simulation (likely), 2) there's a problem with my transcription / algebra (also likely), or 3) the approximation isn't valid for the values of $n$ and $N$ I used. Perhaps the OP can weigh in with some simulation results using this approximation?
Expected value of spurious correlation I found the following article, which addresses this problem: Jiang, Tiefeng (2004). The Asymptotic Distributions of the Largest Entries of Sample Correlation Matrices. The Annals of Applied Probabilit
23,938
Expected value of spurious correlation
Further to the answer provided by @jmtroos, below are the details of my simulation, and a comparison with @jmtroos's derivation of the expectation from Jiang (2004), that is: $$E\left[L_n^2 \right]= \frac{1}{n} \left \{ 2\log\left( \frac{N^2}{n^2\sqrt{8\pi}} \right) + 2\gamma+ 4\log n - \log(\log(n))\right \}$$ The values of this expectation seem to be above the simulated values for small $N$ and below for large $N$ and they appear to diverge slightly as $N$ increases. However, the differences diminish for increasing $n$, as we would expect as the paper claims that the distribution is asymptotic. I have tried various $n \in [100,500]$. The simulation below uses $n=200$. I'm pretty new to R, so any hints or suggestions to make my code better would be warmly welcomed. set.seed(1) ns <- 500 # number of simulations for each N n <- 200 # length of each vector mu <- 0 sigma <- 1 # parameters for the distribution we simulate from par(mfrow=c(5,5)) x<-trunc(seq(from=5,to=n, length=20)) #vector of Ns y<-vector(mode = "numeric") #vector to store the mean correlations k<- 1 #index for y for (N in x) { # loop over a range of N dt <- matrix(nrow=n,ncol=N) J <- vector(mode = "numeric") # vector to store the simulated largest absolute # correlations for each N for (j in 1:ns) { # for each N, simulated ns times for (i in 1:N) { dt[,i] <- rnorm(n,mu,sigma) } # perform the simulation M<-matrix(cor(dt),nrow=N,ncol=N) m <- M diag(m) <- NA J[j] <- max(abs(m), na.rm=TRUE) # obtain the largest absolute correlation # these 3 lines came from stackoverflow } hist(J,main=paste("N=",N, " n=",n, " N(0,1)", "\nmean=",round(J[j],4))) y[k]<-mean(J) k=k+1 } lm1 <- lm(y~log(x)) summary(lm1) logx_sq=log(x)^2 lm2<-lm(y~log(x)+logx_sq) summary(lm2) # linear models for these simulations # Jiang 2004 paper, computation: gamma = 0.5772 yy <- vector(mode = "numeric") yy <- sqrt((2*log((x^2)/(sqrt(8*pi)*n^2)) + 2*gamma-(-4*log(n)+log(log(n))))/n) plot(x,yy) # plot the simulated correlations points(x,y,col='red') # add the points using the expectation
Expected value of spurious correlation
Further to the answer provided by @jmtroos, below are the details of my simulation, and a comparison with @jmtroos's derivation of the expectation from Jiang (2004), that is: $$E\left[L_n^2 \right]= \
Expected value of spurious correlation Further to the answer provided by @jmtroos, below are the details of my simulation, and a comparison with @jmtroos's derivation of the expectation from Jiang (2004), that is: $$E\left[L_n^2 \right]= \frac{1}{n} \left \{ 2\log\left( \frac{N^2}{n^2\sqrt{8\pi}} \right) + 2\gamma+ 4\log n - \log(\log(n))\right \}$$ The values of this expectation seem to be above the simulated values for small $N$ and below for large $N$ and they appear to diverge slightly as $N$ increases. However, the differences diminish for increasing $n$, as we would expect as the paper claims that the distribution is asymptotic. I have tried various $n \in [100,500]$. The simulation below uses $n=200$. I'm pretty new to R, so any hints or suggestions to make my code better would be warmly welcomed. set.seed(1) ns <- 500 # number of simulations for each N n <- 200 # length of each vector mu <- 0 sigma <- 1 # parameters for the distribution we simulate from par(mfrow=c(5,5)) x<-trunc(seq(from=5,to=n, length=20)) #vector of Ns y<-vector(mode = "numeric") #vector to store the mean correlations k<- 1 #index for y for (N in x) { # loop over a range of N dt <- matrix(nrow=n,ncol=N) J <- vector(mode = "numeric") # vector to store the simulated largest absolute # correlations for each N for (j in 1:ns) { # for each N, simulated ns times for (i in 1:N) { dt[,i] <- rnorm(n,mu,sigma) } # perform the simulation M<-matrix(cor(dt),nrow=N,ncol=N) m <- M diag(m) <- NA J[j] <- max(abs(m), na.rm=TRUE) # obtain the largest absolute correlation # these 3 lines came from stackoverflow } hist(J,main=paste("N=",N, " n=",n, " N(0,1)", "\nmean=",round(J[j],4))) y[k]<-mean(J) k=k+1 } lm1 <- lm(y~log(x)) summary(lm1) logx_sq=log(x)^2 lm2<-lm(y~log(x)+logx_sq) summary(lm2) # linear models for these simulations # Jiang 2004 paper, computation: gamma = 0.5772 yy <- vector(mode = "numeric") yy <- sqrt((2*log((x^2)/(sqrt(8*pi)*n^2)) + 2*gamma-(-4*log(n)+log(log(n))))/n) plot(x,yy) # plot the simulated correlations points(x,y,col='red') # add the points using the expectation
Expected value of spurious correlation Further to the answer provided by @jmtroos, below are the details of my simulation, and a comparison with @jmtroos's derivation of the expectation from Jiang (2004), that is: $$E\left[L_n^2 \right]= \
23,939
A hair dresser's conundrum
There are a lot of moving parts in this problem, which makes it ripe for simulation. First off, as Elvis mentioned in the comments, it seems like Stacey should take about 16 appointments, as each one is about half an hour. But you know that as the appointments start to get delayed, things start shifting later and later - so if Stacey is going to only start an appointment if she has half an hour left (so much for sweeping the hair off the floor, eh, Stacey?) then we're going to have less than 16 possible slots, if we used a crystal ball to schedule appointments with no resting time. In the next simulation, we can investigate the curve of cost as a function of appointment length. Of course, the rest of the parameters will also end up playing a role here - and in reality, Stacey isn't going to schedule her appointments fractional minutes apart, but this gives us some intuition about what's going on. I've also plotted the time that Stacey has to be at work as the color. I decided that Stacey would never schedule her last appointment after 7:30, but sometimes the appointment shows up late, or there's been a delay! You can see that the time she gets to go home is quantized, so that as appointments get longer, you get one less appointment and then don't have to work as late. And I think that's a missing element here - maybe scheduling your appointments 45 minutes apart is great, but you'll get an extra appointment in if you can squeeze it to 40. That cost is incorporated by Stacey's waiting (which is why the cost goes up as the appointment length goes up) but your valuation of Stacey's time waiting might not be correct. Anyway, fun problem! And a good way to learn some ggplot goodness and remember that my R syntax is super shaky. :) My code is below - please feel free to offer suggestions for improvement. To generate the code for the top plot: hairtime = 30 hairsd = 10 nSim = 1000 allCuts = rep(0,nSim) allTime = rep(0,nSim) for (i in 1:nSim) { t = 0 ncuts = 0 while (t < 7.5) { ncuts = ncuts+1 nexthairtime = rnorm(1,hairtime,hairsd) t = t+(nexthairtime/60) } allCuts[i] = ncuts allTime[i] = t } hist(allCuts,main="Number of haircuts in an 8 hour day",xlab="Customers") The second simulation is a lot longer... nSim = 100 allCuts = rep(0,nSim) allTime = rep(0,nSim) allCost = rep(0,nSim) lateMean = 10 lateSD = 3 staceyWasted = 1 customerWasted = 3 allLengths = seq(30,60,0.25) # Keep everything in 'long form' just to make our plotting lives easier later allApptCosts = data.frame(matrix(ncol=3,nrow=length(allLengths)*nSim)) names(allApptCosts) <- c("Appt.Length","Cost","Time") ind = 1 # for every appointment length... for (a in 1:length(allLengths)) { apptlen = allLengths[a] # ...simulate the time, and the cost of cutting hair. for (i in 1:nSim) { appts = seq(from=0,to=(8-hairtime/60),by=apptlen/60) t = 0 cost = 0 ncuts = 0 for (a in 1:length(appts)) { customerArrival = appts[a] # late! if (runif(1)>0.9) { customerArrival = appts[a]+rnorm(1,lateMean,lateSD)/60 } waitTime = t-customerArrival # negative waitTime means the customer arrives late cost = cost+max(waitTime,0)*customerWasted+abs(min(waitTime,0))*staceyWasted # get the haircut nexthairtime = rnorm(1,hairtime,hairsd) t = customerArrival+(nexthairtime/60) } allCost[i] = cost allApptCosts[ind,1] = apptlen allApptCosts[ind,2] = cost allApptCosts[ind,3] = t ind = ind+1 } } qplot(Appt.Length,Cost,geom=c("point"),alpha=I(0.75),color=Time,data=allApptCosts,xlab="Appointment Length (minutes)",ylab="Cost")+ geom_smooth(color="black",size=2)+ opts(axis.title.x=theme_text(size=16))+ opts(axis.title.y=theme_text(size=16))+ opts(axis.text.x=theme_text(size=14))+ opts(axis.text.y=theme_text(size=14))+ opts(legend.text=theme_text(size=12))+ opts(legend.title=theme_text(size=12,hjust=-.2))
A hair dresser's conundrum
There are a lot of moving parts in this problem, which makes it ripe for simulation. First off, as Elvis mentioned in the comments, it seems like Stacey should take about 16 appointments, as each one
A hair dresser's conundrum There are a lot of moving parts in this problem, which makes it ripe for simulation. First off, as Elvis mentioned in the comments, it seems like Stacey should take about 16 appointments, as each one is about half an hour. But you know that as the appointments start to get delayed, things start shifting later and later - so if Stacey is going to only start an appointment if she has half an hour left (so much for sweeping the hair off the floor, eh, Stacey?) then we're going to have less than 16 possible slots, if we used a crystal ball to schedule appointments with no resting time. In the next simulation, we can investigate the curve of cost as a function of appointment length. Of course, the rest of the parameters will also end up playing a role here - and in reality, Stacey isn't going to schedule her appointments fractional minutes apart, but this gives us some intuition about what's going on. I've also plotted the time that Stacey has to be at work as the color. I decided that Stacey would never schedule her last appointment after 7:30, but sometimes the appointment shows up late, or there's been a delay! You can see that the time she gets to go home is quantized, so that as appointments get longer, you get one less appointment and then don't have to work as late. And I think that's a missing element here - maybe scheduling your appointments 45 minutes apart is great, but you'll get an extra appointment in if you can squeeze it to 40. That cost is incorporated by Stacey's waiting (which is why the cost goes up as the appointment length goes up) but your valuation of Stacey's time waiting might not be correct. Anyway, fun problem! And a good way to learn some ggplot goodness and remember that my R syntax is super shaky. :) My code is below - please feel free to offer suggestions for improvement. To generate the code for the top plot: hairtime = 30 hairsd = 10 nSim = 1000 allCuts = rep(0,nSim) allTime = rep(0,nSim) for (i in 1:nSim) { t = 0 ncuts = 0 while (t < 7.5) { ncuts = ncuts+1 nexthairtime = rnorm(1,hairtime,hairsd) t = t+(nexthairtime/60) } allCuts[i] = ncuts allTime[i] = t } hist(allCuts,main="Number of haircuts in an 8 hour day",xlab="Customers") The second simulation is a lot longer... nSim = 100 allCuts = rep(0,nSim) allTime = rep(0,nSim) allCost = rep(0,nSim) lateMean = 10 lateSD = 3 staceyWasted = 1 customerWasted = 3 allLengths = seq(30,60,0.25) # Keep everything in 'long form' just to make our plotting lives easier later allApptCosts = data.frame(matrix(ncol=3,nrow=length(allLengths)*nSim)) names(allApptCosts) <- c("Appt.Length","Cost","Time") ind = 1 # for every appointment length... for (a in 1:length(allLengths)) { apptlen = allLengths[a] # ...simulate the time, and the cost of cutting hair. for (i in 1:nSim) { appts = seq(from=0,to=(8-hairtime/60),by=apptlen/60) t = 0 cost = 0 ncuts = 0 for (a in 1:length(appts)) { customerArrival = appts[a] # late! if (runif(1)>0.9) { customerArrival = appts[a]+rnorm(1,lateMean,lateSD)/60 } waitTime = t-customerArrival # negative waitTime means the customer arrives late cost = cost+max(waitTime,0)*customerWasted+abs(min(waitTime,0))*staceyWasted # get the haircut nexthairtime = rnorm(1,hairtime,hairsd) t = customerArrival+(nexthairtime/60) } allCost[i] = cost allApptCosts[ind,1] = apptlen allApptCosts[ind,2] = cost allApptCosts[ind,3] = t ind = ind+1 } } qplot(Appt.Length,Cost,geom=c("point"),alpha=I(0.75),color=Time,data=allApptCosts,xlab="Appointment Length (minutes)",ylab="Cost")+ geom_smooth(color="black",size=2)+ opts(axis.title.x=theme_text(size=16))+ opts(axis.title.y=theme_text(size=16))+ opts(axis.text.x=theme_text(size=14))+ opts(axis.text.y=theme_text(size=14))+ opts(legend.text=theme_text(size=12))+ opts(legend.title=theme_text(size=12,hjust=-.2))
A hair dresser's conundrum There are a lot of moving parts in this problem, which makes it ripe for simulation. First off, as Elvis mentioned in the comments, it seems like Stacey should take about 16 appointments, as each one
23,940
What is a good index of the degree of violation of normality and what descriptive labels could be attached to that index?
A) What is the best single index of the degree to which the data violates normality? B) Or is it just better to talk about multiple indices of normality violation (e.g., skewness, kurtosis, outlier prevalence)? I would vote for B. Different violations have different consequences. For example, unimodal, symmetrical distributions with heavy tails make your CIs very wide and presumably reduce the power to detect any effects. The mean, however, still hits the "typical" value. For very skewed distributions, the mean for example, might not be a very sensible index of "the typical value". C) How can confidence intervals be calculated (or perhaps a Bayesian approach) for the index? I don't know about Bayesian statistics, but concerning classical test of normality, I'd like to cite Erceg-Hurn et al. (2008) [2]: Another problem is that assumption tests have their own assumptions. Normality tests usually assume that data are homoscedastic; tests of homoscedasticity assume that data are normally distributed. If the normality and homoscedasticity assumptions are violated, the validity of the assumption tests can be seriously compromised. Prominent statisticians have described the assumption tests (e.g., Levene’s test, the Kolmogorov–Smirnov test) built into software such as SPSS as fatally flawed and recommended that these tests never be used (D’Agostino, 1986; Glass & Hopkins, 1996). D) What kind of verbal labels could you assign to points on that index to indicate the degree of violation of normality (e.g., mild, moderate, strong, extreme, etc.)? Micceri (1989) [1] did an analysis of 440 large scale data sets in psychology. He assessed the symmetry and the tail weight and defined criteria and labels. Labels for asymmetry range from 'relatively symmetric' to 'moderate --> extreme --> exponential asymmetry'. Labels for tail weight range from 'Uniform --> less than Gaussian --> About Gaussian --> Moderate --> Extreme --> Double exponential contamination'. Each classification is based on multiple, robust criteria. He found, that from these 440 data sets only 28% were relatively symmetric, and only 15% were about Gaussian concerning tail weights. Therefore the nice title of the paper: The unicorn, the normal curve, and other improbable creatures I wrote an R function, that automatically assesses Micceri's criteria and also prints out the labels: # This function prints out the Micceri-criteria for tail weight and symmetry of a distribution micceri <- function(x, plot=FALSE) { library(fBasics) QS <- (quantile(x, prob=c(.975, .95, .90)) - median(x)) / (quantile(x, prob=c(.75)) - median(x)) n <- length(x) x.s <- sort(x) U05 <- mean(x.s[(.95*n ):n]) L05 <- mean(x.s[1:(.05*n)]) U20 <- mean(x.s[(.80*n):n]) L20 <- mean(x.s[1:(.20*n)]) U50 <- mean(x.s[(.50*n):n]) L50 <- mean(x.s[1:(.50*n)]) M25 <- mean(x.s[(.375*n):(.625*n)]) Q <- (U05 - L05)/(U50 - L50) Q1 <- (U20 - L20)/(U50 - L50) Q2 <- (U05 - M25)/(M25 - L05) # mean/median interval QR <- quantile(x, prob=c(.25, .75)) # Interquartile range MM <- abs(mean(x) - median(x)) / (1.4807*(abs(QR[2] - QR[1])/2)) SKEW <- skewness(x) if (plot==TRUE) plot(density(x)) tail_weight <- round(c(QS, Q=Q, Q1=Q1), 2) symmetry <- round(c(Skewness=SKEW, MM=MM, Q2=Q2), 2) cat.tail <- matrix(c(1.9, 2.75, 3.05, 3.9, 4.3, 1.8, 2.3, 2.5, 2.8, 3.3, 1.6, 1.85, 1.93, 2, 2.3, 1.9, 2.5, 2.65, 2.73, 3.3, 1.6, 1.7, 1.8, 1.85, 1.93), ncol=5, nrow=5) cat.sym <- matrix(c(0.31, 0.71, 2, 0.05, 0.18, 0.37, 1.25, 1.75, 4.70), ncol=3, nrow=3) ts <- c() for (i in 1:5) {ts <- c(ts, sum(abs(tail_weight[i]) > cat.tail[,i]) + 1)} ss <- c() for (i in 1:3) {ss <- c(ss, sum(abs(symmetry[i]) > cat.sym[,i]) + 1)} tlabels <- c("Uniform", "Less than Gaussian", "About Gaussian", "Moderate contamination", "Extreme contamination", "Double exponential contamination") slabels <- c("Relatively symmetric", "Moderate asymmetry", "Extreme asymmetry", "Exponential asymmetry") cat("Tail weight indexes:\n") print(tail_weight) cat(paste("\nMicceri category:", tlabels[max(ts)],"\n")) cat("\n\nAsymmetry indexes:\n") print(symmetry) cat(paste("\nMicceri category:", slabels[max(ss)])) tail.cat <- factor(max(ts), levels=1:length(tlabels), labels=tlabels, ordered=TRUE) sym.cat <- factor(max(ss), levels=1:length(slabels), labels=slabels, ordered=TRUE) invisible(list(tail_weight=tail_weight, symmetry=symmetry, tail.cat=tail.cat, sym.cat=sym.cat)) } Here's a test for the standard normal distribution, a $t$ with 8 df, and a log-normal: > micceri(rnorm(10000)) Tail weight indexes: 97.5% 95% 90% Q Q1 2.86 2.42 1.88 2.59 1.76 Micceri category: About Gaussian Asymmetry indexes: Skewness MM.75% Q2 0.01 0.00 1.00 Micceri category: Relatively symmetric > micceri(rt(10000, 8)) Tail weight indexes: 97.5% 95% 90% Q Q1 3.19 2.57 1.94 2.81 1.79 Micceri category: Extreme contamination Asymmetry indexes: Skewness MM.75% Q2 -0.03 0.00 0.98 Micceri category: Relatively symmetric > micceri(rlnorm(10000)) Tail weight indexes: 97.5% 95% 90% Q Q1 6.24 4.30 2.67 3.72 1.93 Micceri category: Double exponential contamination Asymmetry indexes: Skewness MM.75% Q2 5.28 0.59 8.37 Micceri category: Exponential asymmetry [1] Micceri, T. (1989). The unicorn, the normal curve, and other improbable creatures. Psychological Bulletin, 105, 156-166. doi:10.1037/0033-2909.105.1.156 [2] Erceg-Hurn, D. M., & Mirosevich, V. M. (2008). Modern robust statistical methods: An easy way to maximize the accuracy and power of your research. American Psychologist, 63, 591-601.
What is a good index of the degree of violation of normality and what descriptive labels could be at
A) What is the best single index of the degree to which the data violates normality? B) Or is it just better to talk about multiple indices of normality violation (e.g., skewness, kurtosis, outlier pr
What is a good index of the degree of violation of normality and what descriptive labels could be attached to that index? A) What is the best single index of the degree to which the data violates normality? B) Or is it just better to talk about multiple indices of normality violation (e.g., skewness, kurtosis, outlier prevalence)? I would vote for B. Different violations have different consequences. For example, unimodal, symmetrical distributions with heavy tails make your CIs very wide and presumably reduce the power to detect any effects. The mean, however, still hits the "typical" value. For very skewed distributions, the mean for example, might not be a very sensible index of "the typical value". C) How can confidence intervals be calculated (or perhaps a Bayesian approach) for the index? I don't know about Bayesian statistics, but concerning classical test of normality, I'd like to cite Erceg-Hurn et al. (2008) [2]: Another problem is that assumption tests have their own assumptions. Normality tests usually assume that data are homoscedastic; tests of homoscedasticity assume that data are normally distributed. If the normality and homoscedasticity assumptions are violated, the validity of the assumption tests can be seriously compromised. Prominent statisticians have described the assumption tests (e.g., Levene’s test, the Kolmogorov–Smirnov test) built into software such as SPSS as fatally flawed and recommended that these tests never be used (D’Agostino, 1986; Glass & Hopkins, 1996). D) What kind of verbal labels could you assign to points on that index to indicate the degree of violation of normality (e.g., mild, moderate, strong, extreme, etc.)? Micceri (1989) [1] did an analysis of 440 large scale data sets in psychology. He assessed the symmetry and the tail weight and defined criteria and labels. Labels for asymmetry range from 'relatively symmetric' to 'moderate --> extreme --> exponential asymmetry'. Labels for tail weight range from 'Uniform --> less than Gaussian --> About Gaussian --> Moderate --> Extreme --> Double exponential contamination'. Each classification is based on multiple, robust criteria. He found, that from these 440 data sets only 28% were relatively symmetric, and only 15% were about Gaussian concerning tail weights. Therefore the nice title of the paper: The unicorn, the normal curve, and other improbable creatures I wrote an R function, that automatically assesses Micceri's criteria and also prints out the labels: # This function prints out the Micceri-criteria for tail weight and symmetry of a distribution micceri <- function(x, plot=FALSE) { library(fBasics) QS <- (quantile(x, prob=c(.975, .95, .90)) - median(x)) / (quantile(x, prob=c(.75)) - median(x)) n <- length(x) x.s <- sort(x) U05 <- mean(x.s[(.95*n ):n]) L05 <- mean(x.s[1:(.05*n)]) U20 <- mean(x.s[(.80*n):n]) L20 <- mean(x.s[1:(.20*n)]) U50 <- mean(x.s[(.50*n):n]) L50 <- mean(x.s[1:(.50*n)]) M25 <- mean(x.s[(.375*n):(.625*n)]) Q <- (U05 - L05)/(U50 - L50) Q1 <- (U20 - L20)/(U50 - L50) Q2 <- (U05 - M25)/(M25 - L05) # mean/median interval QR <- quantile(x, prob=c(.25, .75)) # Interquartile range MM <- abs(mean(x) - median(x)) / (1.4807*(abs(QR[2] - QR[1])/2)) SKEW <- skewness(x) if (plot==TRUE) plot(density(x)) tail_weight <- round(c(QS, Q=Q, Q1=Q1), 2) symmetry <- round(c(Skewness=SKEW, MM=MM, Q2=Q2), 2) cat.tail <- matrix(c(1.9, 2.75, 3.05, 3.9, 4.3, 1.8, 2.3, 2.5, 2.8, 3.3, 1.6, 1.85, 1.93, 2, 2.3, 1.9, 2.5, 2.65, 2.73, 3.3, 1.6, 1.7, 1.8, 1.85, 1.93), ncol=5, nrow=5) cat.sym <- matrix(c(0.31, 0.71, 2, 0.05, 0.18, 0.37, 1.25, 1.75, 4.70), ncol=3, nrow=3) ts <- c() for (i in 1:5) {ts <- c(ts, sum(abs(tail_weight[i]) > cat.tail[,i]) + 1)} ss <- c() for (i in 1:3) {ss <- c(ss, sum(abs(symmetry[i]) > cat.sym[,i]) + 1)} tlabels <- c("Uniform", "Less than Gaussian", "About Gaussian", "Moderate contamination", "Extreme contamination", "Double exponential contamination") slabels <- c("Relatively symmetric", "Moderate asymmetry", "Extreme asymmetry", "Exponential asymmetry") cat("Tail weight indexes:\n") print(tail_weight) cat(paste("\nMicceri category:", tlabels[max(ts)],"\n")) cat("\n\nAsymmetry indexes:\n") print(symmetry) cat(paste("\nMicceri category:", slabels[max(ss)])) tail.cat <- factor(max(ts), levels=1:length(tlabels), labels=tlabels, ordered=TRUE) sym.cat <- factor(max(ss), levels=1:length(slabels), labels=slabels, ordered=TRUE) invisible(list(tail_weight=tail_weight, symmetry=symmetry, tail.cat=tail.cat, sym.cat=sym.cat)) } Here's a test for the standard normal distribution, a $t$ with 8 df, and a log-normal: > micceri(rnorm(10000)) Tail weight indexes: 97.5% 95% 90% Q Q1 2.86 2.42 1.88 2.59 1.76 Micceri category: About Gaussian Asymmetry indexes: Skewness MM.75% Q2 0.01 0.00 1.00 Micceri category: Relatively symmetric > micceri(rt(10000, 8)) Tail weight indexes: 97.5% 95% 90% Q Q1 3.19 2.57 1.94 2.81 1.79 Micceri category: Extreme contamination Asymmetry indexes: Skewness MM.75% Q2 -0.03 0.00 0.98 Micceri category: Relatively symmetric > micceri(rlnorm(10000)) Tail weight indexes: 97.5% 95% 90% Q Q1 6.24 4.30 2.67 3.72 1.93 Micceri category: Double exponential contamination Asymmetry indexes: Skewness MM.75% Q2 5.28 0.59 8.37 Micceri category: Exponential asymmetry [1] Micceri, T. (1989). The unicorn, the normal curve, and other improbable creatures. Psychological Bulletin, 105, 156-166. doi:10.1037/0033-2909.105.1.156 [2] Erceg-Hurn, D. M., & Mirosevich, V. M. (2008). Modern robust statistical methods: An easy way to maximize the accuracy and power of your research. American Psychologist, 63, 591-601.
What is a good index of the degree of violation of normality and what descriptive labels could be at A) What is the best single index of the degree to which the data violates normality? B) Or is it just better to talk about multiple indices of normality violation (e.g., skewness, kurtosis, outlier pr
23,941
Can I use a paired t-test when the samples are normally distributed but their difference is not?
A paired t test only analyzes the list of paired differences, and assumes that sample of values is randomly sampled from a Gaussian population. If that assumption is grossly violated, the paired t test is not valid. The distribution from which the before and after values are sampled is irrelevant -- only the population the differences are sampled from matters.
Can I use a paired t-test when the samples are normally distributed but their difference is not?
A paired t test only analyzes the list of paired differences, and assumes that sample of values is randomly sampled from a Gaussian population. If that assumption is grossly violated, the paired t tes
Can I use a paired t-test when the samples are normally distributed but their difference is not? A paired t test only analyzes the list of paired differences, and assumes that sample of values is randomly sampled from a Gaussian population. If that assumption is grossly violated, the paired t test is not valid. The distribution from which the before and after values are sampled is irrelevant -- only the population the differences are sampled from matters.
Can I use a paired t-test when the samples are normally distributed but their difference is not? A paired t test only analyzes the list of paired differences, and assumes that sample of values is randomly sampled from a Gaussian population. If that assumption is grossly violated, the paired t tes
23,942
How to specify specific contrasts for repeated measures ANOVA using car?
This method is generally considered "old-fashioned" so while it may be possible, the syntax is difficult and I suspect fewer people know how to manipulate the anova commands to get what you want. The more common method is using glht with a likelihood-based model from nlme or lme4. (I'm certainly welcome to be proved wrong by other answers though.) That said, if I needed to do this, I wouldn't bother with the anova commands; I'd just fit the equivalent model using lm, pick out the right error term for this contrast, and compute the F test myself (or equivalently, t test since there's only 1 df). This requires everything to be balanced and have sphericity, but if you don't have that, you should probably be using a likelihood-based model anyway. You might be able to somewhat correct for non-sphericity using the Greenhouse-Geiser or Huynh-Feldt corrections which (I believe) use the same F statistic but modify the df of the error term. If you really want to use car, you might find the heplot vignettes helpful; they describe how the matrices in the car package are defined. Using caracal's method (for the contrasts 1&2 - 3 and 1&2 - 4&5), I get psiHat tStat F pVal 1 -3.0208333 -7.2204644 52.1351067 2.202677e-09 2 -0.2083333 -0.6098777 0.3719508 5.445988e-01 This is how I'd get those same p-values: Reshape the data into long format and run lm to get all the SS terms. library(reshape2) d <- OBrienKaiser d$id <- factor(1:nrow(d)) dd <- melt(d, id.vars=c(18,1:2), measure.vars=3:17) dd$hour <- factor(as.numeric(gsub("[a-z.]*","",dd$variable))) dd$phase <- factor(gsub("[0-9.]*","", dd$variable), levels=c("pre","post","fup")) m <- lm(value ~ treatment*hour*phase + treatment*hour*phase*id, data=dd) anova(m) Make an alternate contrast matrix for the hour term. foo <- matrix(0, nrow=nrow(dd), ncol=4) foo[dd$hour %in% c(1,2) ,1] <- 0.5 foo[dd$hour %in% c(3) ,1] <- -1 foo[dd$hour %in% c(1,2) ,2] <- 0.5 foo[dd$hour %in% c(4,5) ,2] <- -0.5 foo[dd$hour %in% 1 ,3] <- 1 foo[dd$hour %in% 2 ,3] <- 0 foo[dd$hour %in% 4 ,4] <- 1 foo[dd$hour %in% 5 ,4] <- 0 Check that my contrasts give the same SS as the default contrasts (and the same as from the full model). anova(lm(value ~ hour, data=dd)) anova(lm(value ~ foo, data=dd)) Get the SS and df for just the two contrasts I want. anova(lm(value ~ foo[,1], data=dd)) anova(lm(value ~ foo[,2], data=dd)) Get the p-values. > F <- 73.003/(72.81/52) > pf(F, 1, 52, lower=FALSE) [1] 2.201148e-09 > F <- .5208/(72.81/52) > pf(F, 1, 52, lower=FALSE) [1] 0.5445999 Optionally adjust for sphericity. pf(F, 1*.48867, 52*.48867, lower=FALSE) pf(F, 1*.57413, 52*.57413, lower=FALSE)
How to specify specific contrasts for repeated measures ANOVA using car?
This method is generally considered "old-fashioned" so while it may be possible, the syntax is difficult and I suspect fewer people know how to manipulate the anova commands to get what you want. The
How to specify specific contrasts for repeated measures ANOVA using car? This method is generally considered "old-fashioned" so while it may be possible, the syntax is difficult and I suspect fewer people know how to manipulate the anova commands to get what you want. The more common method is using glht with a likelihood-based model from nlme or lme4. (I'm certainly welcome to be proved wrong by other answers though.) That said, if I needed to do this, I wouldn't bother with the anova commands; I'd just fit the equivalent model using lm, pick out the right error term for this contrast, and compute the F test myself (or equivalently, t test since there's only 1 df). This requires everything to be balanced and have sphericity, but if you don't have that, you should probably be using a likelihood-based model anyway. You might be able to somewhat correct for non-sphericity using the Greenhouse-Geiser or Huynh-Feldt corrections which (I believe) use the same F statistic but modify the df of the error term. If you really want to use car, you might find the heplot vignettes helpful; they describe how the matrices in the car package are defined. Using caracal's method (for the contrasts 1&2 - 3 and 1&2 - 4&5), I get psiHat tStat F pVal 1 -3.0208333 -7.2204644 52.1351067 2.202677e-09 2 -0.2083333 -0.6098777 0.3719508 5.445988e-01 This is how I'd get those same p-values: Reshape the data into long format and run lm to get all the SS terms. library(reshape2) d <- OBrienKaiser d$id <- factor(1:nrow(d)) dd <- melt(d, id.vars=c(18,1:2), measure.vars=3:17) dd$hour <- factor(as.numeric(gsub("[a-z.]*","",dd$variable))) dd$phase <- factor(gsub("[0-9.]*","", dd$variable), levels=c("pre","post","fup")) m <- lm(value ~ treatment*hour*phase + treatment*hour*phase*id, data=dd) anova(m) Make an alternate contrast matrix for the hour term. foo <- matrix(0, nrow=nrow(dd), ncol=4) foo[dd$hour %in% c(1,2) ,1] <- 0.5 foo[dd$hour %in% c(3) ,1] <- -1 foo[dd$hour %in% c(1,2) ,2] <- 0.5 foo[dd$hour %in% c(4,5) ,2] <- -0.5 foo[dd$hour %in% 1 ,3] <- 1 foo[dd$hour %in% 2 ,3] <- 0 foo[dd$hour %in% 4 ,4] <- 1 foo[dd$hour %in% 5 ,4] <- 0 Check that my contrasts give the same SS as the default contrasts (and the same as from the full model). anova(lm(value ~ hour, data=dd)) anova(lm(value ~ foo, data=dd)) Get the SS and df for just the two contrasts I want. anova(lm(value ~ foo[,1], data=dd)) anova(lm(value ~ foo[,2], data=dd)) Get the p-values. > F <- 73.003/(72.81/52) > pf(F, 1, 52, lower=FALSE) [1] 2.201148e-09 > F <- .5208/(72.81/52) > pf(F, 1, 52, lower=FALSE) [1] 0.5445999 Optionally adjust for sphericity. pf(F, 1*.48867, 52*.48867, lower=FALSE) pf(F, 1*.57413, 52*.57413, lower=FALSE)
How to specify specific contrasts for repeated measures ANOVA using car? This method is generally considered "old-fashioned" so while it may be possible, the syntax is difficult and I suspect fewer people know how to manipulate the anova commands to get what you want. The
23,943
How to specify specific contrasts for repeated measures ANOVA using car?
If you want/have to use contrasts with the pooled error term from the corresponding ANOVA, you could do the following. Unfortunately, this will be long, and I don't know how to do this more conveniently. Still, I think the results are correct, as they are verified against Maxwell & Delaney (see below). You want to compare groups of your first within factor hour in an SPF-p.qr design (notation from Kirk (1995): Split-Plot-Factorial design 1 between factor treatment with p groups, first within factor hour with q groups, second within factor prePostFup with r groups). The following assumes identically sized treatment groups and sphericity. Nj <- 10 # number of subjects per group P <- 3 # number of treatment groups Q <- 5 # number of hour groups R <- 3 # number of PrePostFup groups id <- factor(rep(1:(P*Nj), times=Q*R)) # subject treat <- factor(rep(LETTERS[1:P], times=Q*R*Nj), labels=c("CG", "A", "B")) # treatment hour <- factor(rep(rep(1:Q, each=P*Nj), times=R)) # hour ppf <- factor(rep(1:R, each=P*Q*Nj), labels=c("pre", "post", "fup")) # prePostFup DV <- round(rnorm(Nj*P*Q*R, 15, 2), 2) # some data with no effects dfPQR <- data.frame(id, treat, hour, ppf, DV) # data frame long format summary(aov(DV ~ treat*hour*ppf + Error(id/(hour*ppf)), data=dfPQR)) # SPF-p.qr ANOVA First note that the main effect for hour is the same after averaging over prePostFup, thus switching to the simpler SPF-p.q design which only contains treatment and hour as IVs. dfPQ <- aggregate(DV ~ id + treat + hour, FUN=mean, data=dfPQR) # average over ppf # SPF-p.q ANOVA, note effect for hour is the same as before summary(aov(DV ~ treat*hour + Error(id/hour), data=dfPQ)) Now note that in the SPF-p.q ANOVA, the effect for hour is tested against the interaction id:hour, i.e., this interaction provides the error term for the test. Now the contrasts for hour groups can be tested just like in a oneway between-subjects ANOVA by simply substituting the error term, and corresponding degrees of freedom. The easy way to get the SS and df of this interaction is to fit the model with lm(). (anRes <- anova(lm(DV ~ treat*hour*id, data=dfPQ))) SSE <- anRes["hour:id", "Sum Sq"] # SS interaction hour:id -> will be error SS dfSSE <- anRes["hour:id", "Df"] # corresponding df But let's also calculate everything manually here. # substitute DV with its difference to cell / person / treatment group means Mjk <- ave(dfPQ$DV, dfPQ$treat, dfPQ$hour, FUN=mean) # cell means Mi <- ave(dfPQ$DV, dfPQ$id, FUN=mean) # person means Mj <- ave(dfPQ$DV, dfPQ$treat, FUN=mean) # treatment means dfPQ$IDxIV <- dfPQ$DV - Mi - Mjk + Mj # interaction hour:id (SSE <- sum(dfPQ$IDxIV^2)) # SS interaction hour:id -> will be error SS dfSSE <- (Nj*P - P) * (Q-1) # corresponding df (MSE <- SSE / dfSSE) # mean square Now that we have the correct error term, we can build the usual test statistic for planned comparisons: $t = \frac{\hat{\psi} - 0}{||c|| \sqrt{MS_{E}}}$ where $c$ is the contrast vector, $||c||$ is its length, $\hat{\psi} = \sum\limits_{k=1}^{q} c_{k} M_{.k}$ is the contrast estimate, and $MS_{E}$ is the mean square for the hour:id interaction (the suitable error term). Mj <- tapply(dfPQ$DV, dfPQ$hour, FUN=mean) # group means for hour Nj <- table(dfPQ$hour) # cell sizes for hour (here the same) cntr <- rbind(c(1, 1, -2, 0, 0), c(1, 1, -1, -1, 0)) # matrix of contrast vectors psiHat <- cntr %*% Mj # estimates psi-hat lenSq <- cntr^2 %*% (1/Nj) # squared lengths of contrast vectors tStat <- psiHat / sqrt(lenSq*MSE) # t-statistics pVal <- 2*(1-pt(abs(tStat), dfSSE)) # p-values data.frame(psiHat, tStat, pVal) For multiple comparisons, you'd have to think about $\alpha$-correction methods, e.g., Bonferroni. The corresponding calculations for Maxwell & Delaney's (2004) example on p. 599f can be found here. Note that M&D calculate the F-value, to see that the results are identical, you have to square the value for the t-statistic. That code also includes the analysis done with Anova() from car, as well as a manual calculation of the $\hat{\epsilon}$ corrections for the main effect of the within-factor.
How to specify specific contrasts for repeated measures ANOVA using car?
If you want/have to use contrasts with the pooled error term from the corresponding ANOVA, you could do the following. Unfortunately, this will be long, and I don't know how to do this more convenient
How to specify specific contrasts for repeated measures ANOVA using car? If you want/have to use contrasts with the pooled error term from the corresponding ANOVA, you could do the following. Unfortunately, this will be long, and I don't know how to do this more conveniently. Still, I think the results are correct, as they are verified against Maxwell & Delaney (see below). You want to compare groups of your first within factor hour in an SPF-p.qr design (notation from Kirk (1995): Split-Plot-Factorial design 1 between factor treatment with p groups, first within factor hour with q groups, second within factor prePostFup with r groups). The following assumes identically sized treatment groups and sphericity. Nj <- 10 # number of subjects per group P <- 3 # number of treatment groups Q <- 5 # number of hour groups R <- 3 # number of PrePostFup groups id <- factor(rep(1:(P*Nj), times=Q*R)) # subject treat <- factor(rep(LETTERS[1:P], times=Q*R*Nj), labels=c("CG", "A", "B")) # treatment hour <- factor(rep(rep(1:Q, each=P*Nj), times=R)) # hour ppf <- factor(rep(1:R, each=P*Q*Nj), labels=c("pre", "post", "fup")) # prePostFup DV <- round(rnorm(Nj*P*Q*R, 15, 2), 2) # some data with no effects dfPQR <- data.frame(id, treat, hour, ppf, DV) # data frame long format summary(aov(DV ~ treat*hour*ppf + Error(id/(hour*ppf)), data=dfPQR)) # SPF-p.qr ANOVA First note that the main effect for hour is the same after averaging over prePostFup, thus switching to the simpler SPF-p.q design which only contains treatment and hour as IVs. dfPQ <- aggregate(DV ~ id + treat + hour, FUN=mean, data=dfPQR) # average over ppf # SPF-p.q ANOVA, note effect for hour is the same as before summary(aov(DV ~ treat*hour + Error(id/hour), data=dfPQ)) Now note that in the SPF-p.q ANOVA, the effect for hour is tested against the interaction id:hour, i.e., this interaction provides the error term for the test. Now the contrasts for hour groups can be tested just like in a oneway between-subjects ANOVA by simply substituting the error term, and corresponding degrees of freedom. The easy way to get the SS and df of this interaction is to fit the model with lm(). (anRes <- anova(lm(DV ~ treat*hour*id, data=dfPQ))) SSE <- anRes["hour:id", "Sum Sq"] # SS interaction hour:id -> will be error SS dfSSE <- anRes["hour:id", "Df"] # corresponding df But let's also calculate everything manually here. # substitute DV with its difference to cell / person / treatment group means Mjk <- ave(dfPQ$DV, dfPQ$treat, dfPQ$hour, FUN=mean) # cell means Mi <- ave(dfPQ$DV, dfPQ$id, FUN=mean) # person means Mj <- ave(dfPQ$DV, dfPQ$treat, FUN=mean) # treatment means dfPQ$IDxIV <- dfPQ$DV - Mi - Mjk + Mj # interaction hour:id (SSE <- sum(dfPQ$IDxIV^2)) # SS interaction hour:id -> will be error SS dfSSE <- (Nj*P - P) * (Q-1) # corresponding df (MSE <- SSE / dfSSE) # mean square Now that we have the correct error term, we can build the usual test statistic for planned comparisons: $t = \frac{\hat{\psi} - 0}{||c|| \sqrt{MS_{E}}}$ where $c$ is the contrast vector, $||c||$ is its length, $\hat{\psi} = \sum\limits_{k=1}^{q} c_{k} M_{.k}$ is the contrast estimate, and $MS_{E}$ is the mean square for the hour:id interaction (the suitable error term). Mj <- tapply(dfPQ$DV, dfPQ$hour, FUN=mean) # group means for hour Nj <- table(dfPQ$hour) # cell sizes for hour (here the same) cntr <- rbind(c(1, 1, -2, 0, 0), c(1, 1, -1, -1, 0)) # matrix of contrast vectors psiHat <- cntr %*% Mj # estimates psi-hat lenSq <- cntr^2 %*% (1/Nj) # squared lengths of contrast vectors tStat <- psiHat / sqrt(lenSq*MSE) # t-statistics pVal <- 2*(1-pt(abs(tStat), dfSSE)) # p-values data.frame(psiHat, tStat, pVal) For multiple comparisons, you'd have to think about $\alpha$-correction methods, e.g., Bonferroni. The corresponding calculations for Maxwell & Delaney's (2004) example on p. 599f can be found here. Note that M&D calculate the F-value, to see that the results are identical, you have to square the value for the t-statistic. That code also includes the analysis done with Anova() from car, as well as a manual calculation of the $\hat{\epsilon}$ corrections for the main effect of the within-factor.
How to specify specific contrasts for repeated measures ANOVA using car? If you want/have to use contrasts with the pooled error term from the corresponding ANOVA, you could do the following. Unfortunately, this will be long, and I don't know how to do this more convenient
23,944
MCMC sampling of decision tree space vs. random forest
This was done some 13 years ago by Chapman, George and McCulloch (1998, JASA). Of course there's been huge literature on Bayesian regression trees that grew out of this idea.
MCMC sampling of decision tree space vs. random forest
This was done some 13 years ago by Chapman, George and McCulloch (1998, JASA). Of course there's been huge literature on Bayesian regression trees that grew out of this idea.
MCMC sampling of decision tree space vs. random forest This was done some 13 years ago by Chapman, George and McCulloch (1998, JASA). Of course there's been huge literature on Bayesian regression trees that grew out of this idea.
MCMC sampling of decision tree space vs. random forest This was done some 13 years ago by Chapman, George and McCulloch (1998, JASA). Of course there's been huge literature on Bayesian regression trees that grew out of this idea.
23,945
MCMC sampling of decision tree space vs. random forest
Unfortunately, Chipman et al. in their Bayesian CART approach only extract the most probable tree. They never tried to average over trees and compare the performance to Random Forest and Extra-Trees. I've just read the BART paper from Chipman. If I understand correctly, it is a Bayesian averaging of K samples over a collection of m tree. It is interesting in many ways and does seems to perform really good. When m='1', it is a simple Bayesian averaging of K samples of 1 tree, coming from the posterior. However, not much test have been done on that particular aspect. And I would still be interested in knowing how does Random Forest or Extra-Trees compare to the true Bayes model.
MCMC sampling of decision tree space vs. random forest
Unfortunately, Chipman et al. in their Bayesian CART approach only extract the most probable tree. They never tried to average over trees and compare the performance to Random Forest and Extra-Trees.
MCMC sampling of decision tree space vs. random forest Unfortunately, Chipman et al. in their Bayesian CART approach only extract the most probable tree. They never tried to average over trees and compare the performance to Random Forest and Extra-Trees. I've just read the BART paper from Chipman. If I understand correctly, it is a Bayesian averaging of K samples over a collection of m tree. It is interesting in many ways and does seems to perform really good. When m='1', it is a simple Bayesian averaging of K samples of 1 tree, coming from the posterior. However, not much test have been done on that particular aspect. And I would still be interested in knowing how does Random Forest or Extra-Trees compare to the true Bayes model.
MCMC sampling of decision tree space vs. random forest Unfortunately, Chipman et al. in their Bayesian CART approach only extract the most probable tree. They never tried to average over trees and compare the performance to Random Forest and Extra-Trees.
23,946
Setting up Sweave, R, Latex, Eclipse StatET [closed]
I use Eclipse / StatEt to produce document with Sweave and LaTex, and find Eclipse perfect as an editing environment. I can recommend the following guides: Longhow Lam's pdf guide Jeromy Anglim's post (Note this includes info on installing the RJ package required for the latest versions of StatEt.) I also use MikTex on Windows and find everything works really well once it's setup. There's a few good questions and answers on Stack Overflow as well.
Setting up Sweave, R, Latex, Eclipse StatET [closed]
I use Eclipse / StatEt to produce document with Sweave and LaTex, and find Eclipse perfect as an editing environment. I can recommend the following guides: Longhow Lam's pdf guide Jeromy Anglim's pos
Setting up Sweave, R, Latex, Eclipse StatET [closed] I use Eclipse / StatEt to produce document with Sweave and LaTex, and find Eclipse perfect as an editing environment. I can recommend the following guides: Longhow Lam's pdf guide Jeromy Anglim's post (Note this includes info on installing the RJ package required for the latest versions of StatEt.) I also use MikTex on Windows and find everything works really well once it's setup. There's a few good questions and answers on Stack Overflow as well.
Setting up Sweave, R, Latex, Eclipse StatET [closed] I use Eclipse / StatEt to produce document with Sweave and LaTex, and find Eclipse perfect as an editing environment. I can recommend the following guides: Longhow Lam's pdf guide Jeromy Anglim's pos
23,947
Setting up Sweave, R, Latex, Eclipse StatET [closed]
For me, I found that Eclipse was overkill for creation of scientific papers. So, for Windows, what i did was the following: Install Miktex 2.8 (? not sure of version). Make sure that you install Miktex into a directory such as C:\Miktex, as Latex hates file paths with spaces in them. Make sure to select the option to install packages on the fly. Also make sure that R is installed somewhere that Latex can find it i.e. in a path with no spaces. I installed TechNix center as my program to write documents in, but there are many others such as WinEdt, eclipse, texmaker, or indeed Emacs. Now, make sure that you have \usepackage{Sweave} and usepackage{graphicx} in your preamble. As I'm sure you know, you need to put <>= at the start of your R chunk, and end it with @. You will need either the package xtable or Hmisc to convert R objects to a latex format. I like xtable, but you will probably need to do quite a bit of juggling of objects to get it into a form that xtable will accept (lm outputs, data frames, matrices). When inserting a table make sure to put the results=tex option into your preamble for the code chunk, and if you need a figure, ensure that the fig=TRUE option is also there. You can also only generate one figure per chunk, so just bear that in mind. Something to be very careful with is that the R code is at the extreme left of the page, as if it is enclosed in an environment then it will be ignored (this took me a long time to figure out). You need to save the file as .Rnw - make sure that whatever tex program you use does not append a .tex after this, as this will cause problems. Then either run R CMD Sweave foo.Rnw from the command line, or from within R run Sweave("foo.Rnw"). Inevitably it will fail at some point (especially if you haven't done this before) so just debug your .Rnw file, rinse and repeat. If it is the first time you have done this, it may prove easier to code all the R analyses from within r, and then use print statements to insert them into LaTex. I wouldn't recommend this as a good idea though, as if you discover that your datafile has errors at the end of this procedure (as i did last weekend) then you will need to rerun all of your analyses, which if you could properly from within latex from the beginning, can be avoided. Also, Sweave computations can take some time, so you may wish to use the R package cacheSweave to save rerunning analyses. Apparently the R package highlight allows for colour coding of R code in documents, but i have not used this. I've never used latex or R on a Mac, so i will leave that explanation to someone else. Hope this helps.
Setting up Sweave, R, Latex, Eclipse StatET [closed]
For me, I found that Eclipse was overkill for creation of scientific papers. So, for Windows, what i did was the following: Install Miktex 2.8 (? not sure of version). Make sure that you install Mikt
Setting up Sweave, R, Latex, Eclipse StatET [closed] For me, I found that Eclipse was overkill for creation of scientific papers. So, for Windows, what i did was the following: Install Miktex 2.8 (? not sure of version). Make sure that you install Miktex into a directory such as C:\Miktex, as Latex hates file paths with spaces in them. Make sure to select the option to install packages on the fly. Also make sure that R is installed somewhere that Latex can find it i.e. in a path with no spaces. I installed TechNix center as my program to write documents in, but there are many others such as WinEdt, eclipse, texmaker, or indeed Emacs. Now, make sure that you have \usepackage{Sweave} and usepackage{graphicx} in your preamble. As I'm sure you know, you need to put <>= at the start of your R chunk, and end it with @. You will need either the package xtable or Hmisc to convert R objects to a latex format. I like xtable, but you will probably need to do quite a bit of juggling of objects to get it into a form that xtable will accept (lm outputs, data frames, matrices). When inserting a table make sure to put the results=tex option into your preamble for the code chunk, and if you need a figure, ensure that the fig=TRUE option is also there. You can also only generate one figure per chunk, so just bear that in mind. Something to be very careful with is that the R code is at the extreme left of the page, as if it is enclosed in an environment then it will be ignored (this took me a long time to figure out). You need to save the file as .Rnw - make sure that whatever tex program you use does not append a .tex after this, as this will cause problems. Then either run R CMD Sweave foo.Rnw from the command line, or from within R run Sweave("foo.Rnw"). Inevitably it will fail at some point (especially if you haven't done this before) so just debug your .Rnw file, rinse and repeat. If it is the first time you have done this, it may prove easier to code all the R analyses from within r, and then use print statements to insert them into LaTex. I wouldn't recommend this as a good idea though, as if you discover that your datafile has errors at the end of this procedure (as i did last weekend) then you will need to rerun all of your analyses, which if you could properly from within latex from the beginning, can be avoided. Also, Sweave computations can take some time, so you may wish to use the R package cacheSweave to save rerunning analyses. Apparently the R package highlight allows for colour coding of R code in documents, but i have not used this. I've never used latex or R on a Mac, so i will leave that explanation to someone else. Hope this helps.
Setting up Sweave, R, Latex, Eclipse StatET [closed] For me, I found that Eclipse was overkill for creation of scientific papers. So, for Windows, what i did was the following: Install Miktex 2.8 (? not sure of version). Make sure that you install Mikt
23,948
Setting up Sweave, R, Latex, Eclipse StatET [closed]
RStudio (rstudio.org) makes things quite easy assuming LaTeX is already installed on your system. There is a PDF button that runs the code through Sweave then runs it through pdflatex and launches a pdf viewer.
Setting up Sweave, R, Latex, Eclipse StatET [closed]
RStudio (rstudio.org) makes things quite easy assuming LaTeX is already installed on your system. There is a PDF button that runs the code through Sweave then runs it through pdflatex and launches a
Setting up Sweave, R, Latex, Eclipse StatET [closed] RStudio (rstudio.org) makes things quite easy assuming LaTeX is already installed on your system. There is a PDF button that runs the code through Sweave then runs it through pdflatex and launches a pdf viewer.
Setting up Sweave, R, Latex, Eclipse StatET [closed] RStudio (rstudio.org) makes things quite easy assuming LaTeX is already installed on your system. There is a PDF button that runs the code through Sweave then runs it through pdflatex and launches a
23,949
Setting up Sweave, R, Latex, Eclipse StatET [closed]
I installed this suite quite recently and followed the instructions as per instructions here. There are links to all required software components required. I use MiKTex for all LaTex components. There are a few pitfalls if you are planning to use 64-bit windows as you will need the additional 64-bit java runtime. This is quite easy to overcome if you go to java.com in a 64-bit IE and verify your installation, it will point you to the 64-bit installer which is otherwise difficult to find. To avoid mucking around with path variables I simply extracted the eclipse folder in C:\Program Files as this is where java lives and 64-bit R. From here the configuration options in eclipse can easily run automatically and find the appropriate parameters. I hope this helps.
Setting up Sweave, R, Latex, Eclipse StatET [closed]
I installed this suite quite recently and followed the instructions as per instructions here. There are links to all required software components required. I use MiKTex for all LaTex components. There
Setting up Sweave, R, Latex, Eclipse StatET [closed] I installed this suite quite recently and followed the instructions as per instructions here. There are links to all required software components required. I use MiKTex for all LaTex components. There are a few pitfalls if you are planning to use 64-bit windows as you will need the additional 64-bit java runtime. This is quite easy to overcome if you go to java.com in a 64-bit IE and verify your installation, it will point you to the 64-bit installer which is otherwise difficult to find. To avoid mucking around with path variables I simply extracted the eclipse folder in C:\Program Files as this is where java lives and 64-bit R. From here the configuration options in eclipse can easily run automatically and find the appropriate parameters. I hope this helps.
Setting up Sweave, R, Latex, Eclipse StatET [closed] I installed this suite quite recently and followed the instructions as per instructions here. There are links to all required software components required. I use MiKTex for all LaTex components. There
23,950
How to compare median survival between groups?
One thing to keep in mind with the Kaplan-Meier survival curve is that it is basically descriptive and not inferential. It is just a function of the data, with an incredibly flexible model that lies behind it. This is a strength because this means there is virtually no assumptions that might be broken, but a weakness because it is hard to generalise it, and that it fits "noise" as well as "signal". If you want to make an inference, then you basically have to introduce something that is unknown that you wish to know. Now one way to compare the median survival times is to make the following assumptions: I have an estimate of the median survival time $t_{i}$ for each of the $i$ states, given by the kaplan meier curve. I expect the true median survival time, $T_{i}$ to be equal to this estimate. $E(T_{i}|t_{i})=t_{i}$ I am 100% certain that the true median survival time is positive. $Pr(T_{i}>0)=1$ Now the "most conservative" way to use these assumptions is the principle of maximum entropy, so you get: $$p(T_{i}|t_{i})= K exp(-\lambda T_{i})$$ Where $K$ and $\lambda$ are chosen such that the PDF is normalised, and the expected value is $t_{i}$. Now we have: $$1=\int_{0}^{\infty}p(T_{i}|t_{i})dT_{i} =K \int_{0}^{\infty}exp(-\lambda T_{i})dT_{i} $$ $$=K \left[-\frac{exp(-\lambda T_{i})}{\lambda}\right]_{T_{i}=0}^{T_{i}=\infty}=\frac{K}{\lambda}\implies K=\lambda $$ and now we have $E(T_{i})=\frac{1}{\lambda}\implies \lambda=t_{i}^{-1}$ And so you have a set of probability distributions for each state. $$p(T_{i}|t_{i})= \frac{1}{t_{i}} exp\left(-\frac{T_{i}}{t_{i}}\right)\;\;\;\;\;(i=1,\dots,N)$$ Which give a joint probability distribution of: $$p(T_{1},T_{2},\dots,T_{N}|t_{1},t_{2},\dots,t_{N})= \prod_{i=1}^{N}\frac{1}{t_{i}} exp\left(-\frac{T_{i}}{t_{i}}\right)$$ Now it sounds like you want to test the hypothesis $H_{0}:T_{1}=T_{2}=\dots=T_{N}=\overline{t}$, where $\overline{t}=\frac{1}{N}\sum_{i=1}^{N}t_{i}$ is the mean median survivial time. The severe alternative hypothesis to test against is the "every state is a unique and beautiful snowflake" hypothesis $H_{A}:T_{1}=t_{1},\dots,T_{N}=t_{N}$ because this is the most likely alternative, and thus represents the information lost in moving to the simpler hypothesis (a "minimax" test). The measure of the evidence against the simpler hypothesis is given by the odds ratio: $$O(H_{A}|H_{0})=\frac{p(T_{1}=t_{1},T_{2}=t_{2},\dots,T_{N}=t_{N}|t_{1},t_{2},\dots,t_{N})}{ p(T_{1}=\overline{t},T_{2}=\overline{t},\dots,T_{N}=\overline{t}|t_{1},t_{2},\dots,t_{N})}$$ $$=\frac{ \left[\prod_{i=1}^{N}\frac{1}{t_{i}}\right] exp\left(-\sum_{i=1}^{N}\frac{t_{i}}{t_{i}}\right) }{ \left[\prod_{i=1}^{N}\frac{1}{t_{i}}\right] exp\left(-\sum_{i=1}^{N}\frac{\overline{t}}{t_{i}}\right) } =exp\left(N\left[\frac{\overline{t}}{t_{harm}}-1\right]\right)$$ Where $$t_{harm}=\left[\frac{1}{N}\sum_{i=1}^{N}t_{i}^{-1}\right]^{-1}\leq \overline{t}$$ is the harmonic mean. Note that the odds will always favour the perfect fit, but not by much if the median survival times are reasonably close. Further, this gives you a direct way to state the evidence of this particular hypothesis test: assumptions 1-3 give maximum odds of $O(H_{A}|H_{0}):1$ against equal median survival times across all states Combine this with a decision rule, loss function, utility function, etc. which says how advantageous it is to accept the simpler hypothesis, and you've got your conclusion! There is no limit to the amount of hypothesis you can test for, and give similar odds for. Just change $H_{0}$ to specify a different set of possible "true values". You could do "significance testing" by choosing the hypothesis as: $$H_{S,i}:T_{i}=t_{i},T_{j}=T=\overline{t}_{(i)}=\frac{1}{N-1}\sum_{j\neq i}t_{j}$$ So this hypothesis is verbally "state $i$ has different median survival rate, but all other states are the same". And then re-do the odds ratio calculation I did above. Although you should be careful about what the alternative hypothesis is. For any one of these below is "reasonable" in the sense that they might be questions you are interested in answering (and they will generally have different answers) my $H_{A}$ defined above - how much worse is $H_{S,i}$ compared to the perfect fit? my $H_{0}$ defined above - how much better is $H_{S,i}$ compared to the average fit? a different $H_{S,k}$ - how much is state $k$ "more different" compared to state $i$? Now one thing which has been over-looked here is correlations between states - this structure assumes that knowing the median survival rate in one state tells you nothing about the median survival rate in another state. While this may seem "bad" it is not to difficult to improve on, and the above calculations are good initial results which are easy to calculate. Adding connections between states will change the probability models, and you will effectively see some "pooling" of the median survival times. One way to incorporate correlations into the analysis is to separate the true survival times into two components, a "common part" or "trend" and an "individual part": $$T_{i}=T+U_{i}$$ And then constrain the individual part $U_{i}$ to have average zero over all units and unknown variance $\sigma$ to be integrated out using a prior describing what knowledge you have of the individual variability, prior to observing the data (or jeffreys prior if you know nothing, and half cauchy if jeffreys causes problems).
How to compare median survival between groups?
One thing to keep in mind with the Kaplan-Meier survival curve is that it is basically descriptive and not inferential. It is just a function of the data, with an incredibly flexible model that lies
How to compare median survival between groups? One thing to keep in mind with the Kaplan-Meier survival curve is that it is basically descriptive and not inferential. It is just a function of the data, with an incredibly flexible model that lies behind it. This is a strength because this means there is virtually no assumptions that might be broken, but a weakness because it is hard to generalise it, and that it fits "noise" as well as "signal". If you want to make an inference, then you basically have to introduce something that is unknown that you wish to know. Now one way to compare the median survival times is to make the following assumptions: I have an estimate of the median survival time $t_{i}$ for each of the $i$ states, given by the kaplan meier curve. I expect the true median survival time, $T_{i}$ to be equal to this estimate. $E(T_{i}|t_{i})=t_{i}$ I am 100% certain that the true median survival time is positive. $Pr(T_{i}>0)=1$ Now the "most conservative" way to use these assumptions is the principle of maximum entropy, so you get: $$p(T_{i}|t_{i})= K exp(-\lambda T_{i})$$ Where $K$ and $\lambda$ are chosen such that the PDF is normalised, and the expected value is $t_{i}$. Now we have: $$1=\int_{0}^{\infty}p(T_{i}|t_{i})dT_{i} =K \int_{0}^{\infty}exp(-\lambda T_{i})dT_{i} $$ $$=K \left[-\frac{exp(-\lambda T_{i})}{\lambda}\right]_{T_{i}=0}^{T_{i}=\infty}=\frac{K}{\lambda}\implies K=\lambda $$ and now we have $E(T_{i})=\frac{1}{\lambda}\implies \lambda=t_{i}^{-1}$ And so you have a set of probability distributions for each state. $$p(T_{i}|t_{i})= \frac{1}{t_{i}} exp\left(-\frac{T_{i}}{t_{i}}\right)\;\;\;\;\;(i=1,\dots,N)$$ Which give a joint probability distribution of: $$p(T_{1},T_{2},\dots,T_{N}|t_{1},t_{2},\dots,t_{N})= \prod_{i=1}^{N}\frac{1}{t_{i}} exp\left(-\frac{T_{i}}{t_{i}}\right)$$ Now it sounds like you want to test the hypothesis $H_{0}:T_{1}=T_{2}=\dots=T_{N}=\overline{t}$, where $\overline{t}=\frac{1}{N}\sum_{i=1}^{N}t_{i}$ is the mean median survivial time. The severe alternative hypothesis to test against is the "every state is a unique and beautiful snowflake" hypothesis $H_{A}:T_{1}=t_{1},\dots,T_{N}=t_{N}$ because this is the most likely alternative, and thus represents the information lost in moving to the simpler hypothesis (a "minimax" test). The measure of the evidence against the simpler hypothesis is given by the odds ratio: $$O(H_{A}|H_{0})=\frac{p(T_{1}=t_{1},T_{2}=t_{2},\dots,T_{N}=t_{N}|t_{1},t_{2},\dots,t_{N})}{ p(T_{1}=\overline{t},T_{2}=\overline{t},\dots,T_{N}=\overline{t}|t_{1},t_{2},\dots,t_{N})}$$ $$=\frac{ \left[\prod_{i=1}^{N}\frac{1}{t_{i}}\right] exp\left(-\sum_{i=1}^{N}\frac{t_{i}}{t_{i}}\right) }{ \left[\prod_{i=1}^{N}\frac{1}{t_{i}}\right] exp\left(-\sum_{i=1}^{N}\frac{\overline{t}}{t_{i}}\right) } =exp\left(N\left[\frac{\overline{t}}{t_{harm}}-1\right]\right)$$ Where $$t_{harm}=\left[\frac{1}{N}\sum_{i=1}^{N}t_{i}^{-1}\right]^{-1}\leq \overline{t}$$ is the harmonic mean. Note that the odds will always favour the perfect fit, but not by much if the median survival times are reasonably close. Further, this gives you a direct way to state the evidence of this particular hypothesis test: assumptions 1-3 give maximum odds of $O(H_{A}|H_{0}):1$ against equal median survival times across all states Combine this with a decision rule, loss function, utility function, etc. which says how advantageous it is to accept the simpler hypothesis, and you've got your conclusion! There is no limit to the amount of hypothesis you can test for, and give similar odds for. Just change $H_{0}$ to specify a different set of possible "true values". You could do "significance testing" by choosing the hypothesis as: $$H_{S,i}:T_{i}=t_{i},T_{j}=T=\overline{t}_{(i)}=\frac{1}{N-1}\sum_{j\neq i}t_{j}$$ So this hypothesis is verbally "state $i$ has different median survival rate, but all other states are the same". And then re-do the odds ratio calculation I did above. Although you should be careful about what the alternative hypothesis is. For any one of these below is "reasonable" in the sense that they might be questions you are interested in answering (and they will generally have different answers) my $H_{A}$ defined above - how much worse is $H_{S,i}$ compared to the perfect fit? my $H_{0}$ defined above - how much better is $H_{S,i}$ compared to the average fit? a different $H_{S,k}$ - how much is state $k$ "more different" compared to state $i$? Now one thing which has been over-looked here is correlations between states - this structure assumes that knowing the median survival rate in one state tells you nothing about the median survival rate in another state. While this may seem "bad" it is not to difficult to improve on, and the above calculations are good initial results which are easy to calculate. Adding connections between states will change the probability models, and you will effectively see some "pooling" of the median survival times. One way to incorporate correlations into the analysis is to separate the true survival times into two components, a "common part" or "trend" and an "individual part": $$T_{i}=T+U_{i}$$ And then constrain the individual part $U_{i}$ to have average zero over all units and unknown variance $\sigma$ to be integrated out using a prior describing what knowledge you have of the individual variability, prior to observing the data (or jeffreys prior if you know nothing, and half cauchy if jeffreys causes problems).
How to compare median survival between groups? One thing to keep in mind with the Kaplan-Meier survival curve is that it is basically descriptive and not inferential. It is just a function of the data, with an incredibly flexible model that lies
23,951
How to compare median survival between groups?
Thought I just add to this topic that you might be interested in quantile regression with censoring. Bottai & Zhang 2010 proposed a "Laplace Regression" that can do just this task, you can find a PDF on this here. There is a package for Stata for this, it has yet not been translated to R although the quantreg package in R has a function for censored quantile regression, crq, that could be an option. I think the approach is very interesting and might be much more intuitive to patients that hazards ratios. Knowing for instance that 50 % on the drug survive 2 more months than ones that don't take the drug and the side effects force you to stay 1-2 months at the hospital might make the choice of treatment much easier.
How to compare median survival between groups?
Thought I just add to this topic that you might be interested in quantile regression with censoring. Bottai & Zhang 2010 proposed a "Laplace Regression" that can do just this task, you can find a PDF
How to compare median survival between groups? Thought I just add to this topic that you might be interested in quantile regression with censoring. Bottai & Zhang 2010 proposed a "Laplace Regression" that can do just this task, you can find a PDF on this here. There is a package for Stata for this, it has yet not been translated to R although the quantreg package in R has a function for censored quantile regression, crq, that could be an option. I think the approach is very interesting and might be much more intuitive to patients that hazards ratios. Knowing for instance that 50 % on the drug survive 2 more months than ones that don't take the drug and the side effects force you to stay 1-2 months at the hospital might make the choice of treatment much easier.
How to compare median survival between groups? Thought I just add to this topic that you might be interested in quantile regression with censoring. Bottai & Zhang 2010 proposed a "Laplace Regression" that can do just this task, you can find a PDF
23,952
How to compare median survival between groups?
First I would visualize the data: calculate confidence intervals and standard errors for the median survivals in each state and show CIs on a forest plot, medians and their SEs using a funnel plot. The “mean median survival all across the country” is a quantity that is estimated from the data and thus has uncertainty so you can not take it as a sharp reference value during significance testing. An other difficulty with the mean-of-all approach is that when you compare a state median to it you are comparing the median to a quantity that already includes that quantity as a component. So it is easier to compare each state to all other states combined. This can be done by performing a log rank test (or its alternatives) for each state. (Edit after reading the answer of probabilityislogic: the log rank test does compare survival in two (or more) groups, but it is not strictly the median that it is comparing. If you are sure it is the median that you want to compare, you may rely on his equations or use resampling here, too) You labelled your question [multiple comparisons], so I assume you also want to adjust (increase) your p values in a way that if you see at least one adjusted p value less than 5% you could conclude that “median survival across states is not equal” at the 5% significance level. You may use generic and overly conservative methods like Bonferroni, but the optimal correction scheme will take the correlations of the p values into consideration. I assume that you don't want to build any a priori knowledge into the correction scheme, so I will discuss a scheme where the adjustment is multiplying each p value by the same C constant. As I don't know how to derive the formula to obtain the optimal C multiplyer, I would use resampling. Under the null hypothesis that the survival characteristics are the same across all states, so you can permutate the state labels of the cancer cases and recalculate medians. After obtaining many resampled vectors of state p values I would numerically find the C multiplyer below which less than 95% of the vectors include no significant p values and above which more then 95%. While the range looks wide I would repeatedly increase the number of resamples by an order of magnitude.
How to compare median survival between groups?
First I would visualize the data: calculate confidence intervals and standard errors for the median survivals in each state and show CIs on a forest plot, medians and their SEs using a funnel plot. T
How to compare median survival between groups? First I would visualize the data: calculate confidence intervals and standard errors for the median survivals in each state and show CIs on a forest plot, medians and their SEs using a funnel plot. The “mean median survival all across the country” is a quantity that is estimated from the data and thus has uncertainty so you can not take it as a sharp reference value during significance testing. An other difficulty with the mean-of-all approach is that when you compare a state median to it you are comparing the median to a quantity that already includes that quantity as a component. So it is easier to compare each state to all other states combined. This can be done by performing a log rank test (or its alternatives) for each state. (Edit after reading the answer of probabilityislogic: the log rank test does compare survival in two (or more) groups, but it is not strictly the median that it is comparing. If you are sure it is the median that you want to compare, you may rely on his equations or use resampling here, too) You labelled your question [multiple comparisons], so I assume you also want to adjust (increase) your p values in a way that if you see at least one adjusted p value less than 5% you could conclude that “median survival across states is not equal” at the 5% significance level. You may use generic and overly conservative methods like Bonferroni, but the optimal correction scheme will take the correlations of the p values into consideration. I assume that you don't want to build any a priori knowledge into the correction scheme, so I will discuss a scheme where the adjustment is multiplying each p value by the same C constant. As I don't know how to derive the formula to obtain the optimal C multiplyer, I would use resampling. Under the null hypothesis that the survival characteristics are the same across all states, so you can permutate the state labels of the cancer cases and recalculate medians. After obtaining many resampled vectors of state p values I would numerically find the C multiplyer below which less than 95% of the vectors include no significant p values and above which more then 95%. While the range looks wide I would repeatedly increase the number of resamples by an order of magnitude.
How to compare median survival between groups? First I would visualize the data: calculate confidence intervals and standard errors for the median survivals in each state and show CIs on a forest plot, medians and their SEs using a funnel plot. T
23,953
Why l2 norm squared but l1 norm not squared?
But in the ElasticNet and Ridge, we use the l2 norm squared. Why is that, is there a particular reason (computational, optimization dynamics, statistical?) A possible reason for the l2 norm being squared in ridge regression (or Tikhonov regularisation) is that it allows an easy expression for the solution of the problem $$\hat\beta = (\textbf{X}^T\textbf{X} + \lambda \textbf{I})^{-1} \textbf{X}^T y$$ where $\textbf{X}$ is the regressor matrix or design matrix, $\lambda$ the scaling parameter for the penalty, $\textbf{I}$ the identity matrix, $y$ the observations, and $\hat{\beta}$ the estimate of the coefficients. That solution can be derived by taking the derivative of the cost function and setting it equal to zero $$\nabla_{\hat\beta} \left[ (y - \textbf{X}\hat\beta )^T(y - \textbf{X}\hat\beta ) + \hat\beta^T \lambda \textbf{I} \hat\beta \right] = \textbf{X}^T \left (y - \textbf{X}\hat\beta \right)+ \lambda \textbf{I} \hat\beta = \textbf{0}$$
Why l2 norm squared but l1 norm not squared?
But in the ElasticNet and Ridge, we use the l2 norm squared. Why is that, is there a particular reason (computational, optimization dynamics, statistical?) A possible reason for the l2 norm being squ
Why l2 norm squared but l1 norm not squared? But in the ElasticNet and Ridge, we use the l2 norm squared. Why is that, is there a particular reason (computational, optimization dynamics, statistical?) A possible reason for the l2 norm being squared in ridge regression (or Tikhonov regularisation) is that it allows an easy expression for the solution of the problem $$\hat\beta = (\textbf{X}^T\textbf{X} + \lambda \textbf{I})^{-1} \textbf{X}^T y$$ where $\textbf{X}$ is the regressor matrix or design matrix, $\lambda$ the scaling parameter for the penalty, $\textbf{I}$ the identity matrix, $y$ the observations, and $\hat{\beta}$ the estimate of the coefficients. That solution can be derived by taking the derivative of the cost function and setting it equal to zero $$\nabla_{\hat\beta} \left[ (y - \textbf{X}\hat\beta )^T(y - \textbf{X}\hat\beta ) + \hat\beta^T \lambda \textbf{I} \hat\beta \right] = \textbf{X}^T \left (y - \textbf{X}\hat\beta \right)+ \lambda \textbf{I} \hat\beta = \textbf{0}$$
Why l2 norm squared but l1 norm not squared? But in the ElasticNet and Ridge, we use the l2 norm squared. Why is that, is there a particular reason (computational, optimization dynamics, statistical?) A possible reason for the l2 norm being squ
23,954
Why l2 norm squared but l1 norm not squared?
A practical reason for squaring the L2 (that is not specific to ridge regression) is that "squaring" the L2 consists of not bothering to take the square root in the first place. And since $x^2$ is strictly increasing (for non-negative x), $||f(\textbf{x})||_2$ and $||f(\textbf{x})||_2^2$ will be optimal at the same point, so if the L2 is the optimization target (as opposed to a regularization penalty or something), it's a free speed gain. Squaring the L1 also doesn't change the optimal point, but it takes more time, so there's no reason to do it. (If it is being used as a regularization penalty, we might still favor the faster option unless we have a specific reason to want the L2 exactly instead of just something (nonlinearly) proportional to it.)
Why l2 norm squared but l1 norm not squared?
A practical reason for squaring the L2 (that is not specific to ridge regression) is that "squaring" the L2 consists of not bothering to take the square root in the first place. And since $x^2$ is st
Why l2 norm squared but l1 norm not squared? A practical reason for squaring the L2 (that is not specific to ridge regression) is that "squaring" the L2 consists of not bothering to take the square root in the first place. And since $x^2$ is strictly increasing (for non-negative x), $||f(\textbf{x})||_2$ and $||f(\textbf{x})||_2^2$ will be optimal at the same point, so if the L2 is the optimization target (as opposed to a regularization penalty or something), it's a free speed gain. Squaring the L1 also doesn't change the optimal point, but it takes more time, so there's no reason to do it. (If it is being used as a regularization penalty, we might still favor the faster option unless we have a specific reason to want the L2 exactly instead of just something (nonlinearly) proportional to it.)
Why l2 norm squared but l1 norm not squared? A practical reason for squaring the L2 (that is not specific to ridge regression) is that "squaring" the L2 consists of not bothering to take the square root in the first place. And since $x^2$ is st
23,955
Correlation without Causation
No. With the caveat that the direct causal relationships embedded in a DAG are beliefs (or at least presuppositions of belief), so that the counterfactual formal causal analysis one performs is predicated on the DAG being true, then your question gets at the utility of this kind of reasoning, because in this worldview correlations can only be interpreted causally given the d-separation of the path from one variable to another. If a set of variables (say, $L$) is sufficient to d-separate the path from $A$ to $Y$ (say, $Y$ as putative effect, and $A$ as putative cause of $Y$), then: one infers a $\text{cor}(Y,A|L) \ne 0$ as evidence that $A$ causes $Y$ (this is nonstandard notation… the folks I am familiar with would more typically write something like $P(Y=1|A=0,L) - P(Y=1|A=1,L) \ne 0$ for levels of $L$ instead of speaking specifically of correlation… likely because DAGs and the inferences drawn from them are nonparametric, but Pearson's correlation is linear, and Spearman's is monotonic), and one infers $\text{cor}(Y,A|L) = 0$ as evidence that $A$ does not cause $Y$. That is the point of this kind of causal analysis. (And is also why it offers value by directing critique of an analysis specifically to the construction of $L$ and the DAG.) Except, kinda yes (but still no). Back to the caveat about DAGs embodying beliefs. Those beliefs may be more or less valid for any given analysis. In fact, the DAG you provide indicates a good reason why: most variables we might imagine (whether fitting into $L$, $Y$, or $A$ in my nomenclature above) are themselves caused by some other variable… likely a variable in the set of unmeasured prior causes $U$. This is why the validity of causal inferences from observation studies are always subject to threats from unmeasured backdoor confounding (i.e. this quality is part of what we mean by 'observational study'), and why randomized control trials have a special kind of value (even though causal inferences from randomized control trials are just as subject to threats from selection bias as observational study designs). Many great examples of correlations existing between 'causally unrelated' variables and processes are provided in links in comments to Mir Henglin's question. I would argue that rather than falsifying my unqualified "No." at the start of my answer, these indicate merely that the DAG has not actually been expanded to cover all the causal variables at play: the set of causal beliefs is incomplete (for example, see Pearl's point about incorporating hidden variables into the DAG). @whuber also made an important comment along these lines: The whole point is that literally any two processes, even when completely independent of each other (causally and probabilistically), that undergo similar deterministic changes over time, will have non-zero correlations. If that's what you mean by "confounding," then so be it—but there doesn't seem to be a new question involved. There are competing interpretations about the appropriateness of time as a causal variable in counterfactual formal causal reasoning. I will point out that: DAG formalisms are explicit only about the qualitative temporal ordering of variables but DAGs are otherwise silent about quantitative lengths of time. So there is a case to be made that lengths of time can serve as a confounding variable in counterfactual formal causal reasoning. The upshot is to repeat my opening caveat: conditional on a DAG being true, then if a path from $A$ to $Y$ is d-separated, then $A$ cannot cause $Y$ if $\text{cor}(Y,A|L) = 0$.
Correlation without Causation
No. With the caveat that the direct causal relationships embedded in a DAG are beliefs (or at least presuppositions of belief), so that the counterfactual formal causal analysis one performs is predic
Correlation without Causation No. With the caveat that the direct causal relationships embedded in a DAG are beliefs (or at least presuppositions of belief), so that the counterfactual formal causal analysis one performs is predicated on the DAG being true, then your question gets at the utility of this kind of reasoning, because in this worldview correlations can only be interpreted causally given the d-separation of the path from one variable to another. If a set of variables (say, $L$) is sufficient to d-separate the path from $A$ to $Y$ (say, $Y$ as putative effect, and $A$ as putative cause of $Y$), then: one infers a $\text{cor}(Y,A|L) \ne 0$ as evidence that $A$ causes $Y$ (this is nonstandard notation… the folks I am familiar with would more typically write something like $P(Y=1|A=0,L) - P(Y=1|A=1,L) \ne 0$ for levels of $L$ instead of speaking specifically of correlation… likely because DAGs and the inferences drawn from them are nonparametric, but Pearson's correlation is linear, and Spearman's is monotonic), and one infers $\text{cor}(Y,A|L) = 0$ as evidence that $A$ does not cause $Y$. That is the point of this kind of causal analysis. (And is also why it offers value by directing critique of an analysis specifically to the construction of $L$ and the DAG.) Except, kinda yes (but still no). Back to the caveat about DAGs embodying beliefs. Those beliefs may be more or less valid for any given analysis. In fact, the DAG you provide indicates a good reason why: most variables we might imagine (whether fitting into $L$, $Y$, or $A$ in my nomenclature above) are themselves caused by some other variable… likely a variable in the set of unmeasured prior causes $U$. This is why the validity of causal inferences from observation studies are always subject to threats from unmeasured backdoor confounding (i.e. this quality is part of what we mean by 'observational study'), and why randomized control trials have a special kind of value (even though causal inferences from randomized control trials are just as subject to threats from selection bias as observational study designs). Many great examples of correlations existing between 'causally unrelated' variables and processes are provided in links in comments to Mir Henglin's question. I would argue that rather than falsifying my unqualified "No." at the start of my answer, these indicate merely that the DAG has not actually been expanded to cover all the causal variables at play: the set of causal beliefs is incomplete (for example, see Pearl's point about incorporating hidden variables into the DAG). @whuber also made an important comment along these lines: The whole point is that literally any two processes, even when completely independent of each other (causally and probabilistically), that undergo similar deterministic changes over time, will have non-zero correlations. If that's what you mean by "confounding," then so be it—but there doesn't seem to be a new question involved. There are competing interpretations about the appropriateness of time as a causal variable in counterfactual formal causal reasoning. I will point out that: DAG formalisms are explicit only about the qualitative temporal ordering of variables but DAGs are otherwise silent about quantitative lengths of time. So there is a case to be made that lengths of time can serve as a confounding variable in counterfactual formal causal reasoning. The upshot is to repeat my opening caveat: conditional on a DAG being true, then if a path from $A$ to $Y$ is d-separated, then $A$ cannot cause $Y$ if $\text{cor}(Y,A|L) = 0$.
Correlation without Causation No. With the caveat that the direct causal relationships embedded in a DAG are beliefs (or at least presuppositions of belief), so that the counterfactual formal causal analysis one performs is predic
23,956
Correlation without Causation
In short: can two d-separated variables have an expected non-zero correlation? No, it is not possible. More precisely: d-separation warrant us that, in a DAG $G$, if two variable $X$ and $Y$ are d-separated by a set of variables $Z$ it is implied that $X$ and $Y$ are independent conditional on $Z$. Note that $Z$ can be the empty set too. Now, you speak about "correlation" and not "conditional correlation", however you speak about d-speration too. From that I suppose that the two d-separated variables that you use are so for $Z$=empty set. Therefore, no correlation nor any kind of statistical association can appear in the population. For example in your DAG $$ X \leftarrow U \rightarrow Y $$ $X$ and $Y$ are d-separated given $U$ Moreover you write For example, the correlation between $X$ and $Y$ with $Y = X^2$ is $0$ I guess the idea in your mind but this statement is not true in general. Indeed if $X$ have distribution $U[0,1]$ this correlation is $>0$.
Correlation without Causation
In short: can two d-separated variables have an expected non-zero correlation? No, it is not possible. More precisely: d-separation warrant us that, in a DAG $G$, if two variable $X$ and $Y$ are d-se
Correlation without Causation In short: can two d-separated variables have an expected non-zero correlation? No, it is not possible. More precisely: d-separation warrant us that, in a DAG $G$, if two variable $X$ and $Y$ are d-separated by a set of variables $Z$ it is implied that $X$ and $Y$ are independent conditional on $Z$. Note that $Z$ can be the empty set too. Now, you speak about "correlation" and not "conditional correlation", however you speak about d-speration too. From that I suppose that the two d-separated variables that you use are so for $Z$=empty set. Therefore, no correlation nor any kind of statistical association can appear in the population. For example in your DAG $$ X \leftarrow U \rightarrow Y $$ $X$ and $Y$ are d-separated given $U$ Moreover you write For example, the correlation between $X$ and $Y$ with $Y = X^2$ is $0$ I guess the idea in your mind but this statement is not true in general. Indeed if $X$ have distribution $U[0,1]$ this correlation is $>0$.
Correlation without Causation In short: can two d-separated variables have an expected non-zero correlation? No, it is not possible. More precisely: d-separation warrant us that, in a DAG $G$, if two variable $X$ and $Y$ are d-se
23,957
How to interpret this shape of QQ plot of standardized residuals?
The set of examples in How to interpret a QQ plot includes the basic shape in your question. Namely, the ends of the line of points turn counter-clockwise relative to the middle. Given that sample quantiles (i.e., your data) are on the y-axis, and theoretical quantiles from a standard normal are on the x-axis, that means the tails of your distribution are fatter than what you would see from a true normal. In other words, those points are much further from the mean than you would expect if the data generating process were actually a normal distribution. There are lots of distributions that are symmetrical and have fatter tails than the normal. I would often start by looking at $t$-distributions, because they are well understood, and you can adjust the tail 'fatness' by modulating the degrees of freedom parameter. Your example is notable in that the middle is very straight, and the ends are also very straight and roughly parallel to each other, with fairly sharp corners in between. That suggests you have a mixture of two distributions with the same mean, but different standard deviations. I can generate a plot that looks pretty similar to yours pretty easily in R with the following code: set.seed(646) # this makes the example exactly reproducible s = 4 # this is the ratio of SDs x = c(rnorm(11600, mean=0, sd=1), # 99.7% of the data come from the 1st distribution rnorm( 400, mean=0, sd=s)) # small fraction comes from 2nd dist w/ greater SD qqnorm(x) # a basic qq-plot A better way to determine the mixing proportions and relative SDs would be to fit a Gaussian mixture model. In R, that can be done with the Mclust package, although any decent statistical software should be able to do it. I demonstrate a basic analysis in my answer to How to test if my distribution is multimodal? You might also simply make some boxplots of your residuals as a function of your categorical variables, either individually or in specified combinations. It may well be that the heteroscedasticity can be easily found and yield meaningful insights into your data. As @COOLserdash noted, I wouldn't worry about this for purposes of statistical inference, although if you can identify a heterogeneous subgroup, you can model your data using weighted least squares. For purposes of prediction, mean predictions should be unaffected by this, but prediction intervals based on normality will be incorrect and yield 'black swans' and occasionally cause problems. So long as you don't collapse the global financial system, it might not be so bad. You could just make the prediction intervals wider, or you could again model it, especially if the subgroups are identifiable.
How to interpret this shape of QQ plot of standardized residuals?
The set of examples in How to interpret a QQ plot includes the basic shape in your question. Namely, the ends of the line of points turn counter-clockwise relative to the middle. Given that sample q
How to interpret this shape of QQ plot of standardized residuals? The set of examples in How to interpret a QQ plot includes the basic shape in your question. Namely, the ends of the line of points turn counter-clockwise relative to the middle. Given that sample quantiles (i.e., your data) are on the y-axis, and theoretical quantiles from a standard normal are on the x-axis, that means the tails of your distribution are fatter than what you would see from a true normal. In other words, those points are much further from the mean than you would expect if the data generating process were actually a normal distribution. There are lots of distributions that are symmetrical and have fatter tails than the normal. I would often start by looking at $t$-distributions, because they are well understood, and you can adjust the tail 'fatness' by modulating the degrees of freedom parameter. Your example is notable in that the middle is very straight, and the ends are also very straight and roughly parallel to each other, with fairly sharp corners in between. That suggests you have a mixture of two distributions with the same mean, but different standard deviations. I can generate a plot that looks pretty similar to yours pretty easily in R with the following code: set.seed(646) # this makes the example exactly reproducible s = 4 # this is the ratio of SDs x = c(rnorm(11600, mean=0, sd=1), # 99.7% of the data come from the 1st distribution rnorm( 400, mean=0, sd=s)) # small fraction comes from 2nd dist w/ greater SD qqnorm(x) # a basic qq-plot A better way to determine the mixing proportions and relative SDs would be to fit a Gaussian mixture model. In R, that can be done with the Mclust package, although any decent statistical software should be able to do it. I demonstrate a basic analysis in my answer to How to test if my distribution is multimodal? You might also simply make some boxplots of your residuals as a function of your categorical variables, either individually or in specified combinations. It may well be that the heteroscedasticity can be easily found and yield meaningful insights into your data. As @COOLserdash noted, I wouldn't worry about this for purposes of statistical inference, although if you can identify a heterogeneous subgroup, you can model your data using weighted least squares. For purposes of prediction, mean predictions should be unaffected by this, but prediction intervals based on normality will be incorrect and yield 'black swans' and occasionally cause problems. So long as you don't collapse the global financial system, it might not be so bad. You could just make the prediction intervals wider, or you could again model it, especially if the subgroups are identifiable.
How to interpret this shape of QQ plot of standardized residuals? The set of examples in How to interpret a QQ plot includes the basic shape in your question. Namely, the ends of the line of points turn counter-clockwise relative to the middle. Given that sample q
23,958
How to interpret this shape of QQ plot of standardized residuals?
A comment with QQ-plots of data from $\mathsf{T}(3)$ and $\mathsf{Laplace}(0,1)$ (Wikipedia) distributions, both with heavy tails. Following up on @COOLSerdash's Comment, I'll show you QQ-plots of data sampled from a couple of distributions that have heavier tails than a normal distribution. set.seed(2020) v = rt(150, 3) # Student's t, DF = 3 plot(qqnorm(v)) points(qqline(v)) w = rexp(500)-rexp(500) # difference of exponentials is Laplace plot(qqnorm(w)) points(qqline(w))
How to interpret this shape of QQ plot of standardized residuals?
A comment with QQ-plots of data from $\mathsf{T}(3)$ and $\mathsf{Laplace}(0,1)$ (Wikipedia) distributions, both with heavy tails. Following up on @COOLSerdash's Comment, I'll show you QQ-plots of dat
How to interpret this shape of QQ plot of standardized residuals? A comment with QQ-plots of data from $\mathsf{T}(3)$ and $\mathsf{Laplace}(0,1)$ (Wikipedia) distributions, both with heavy tails. Following up on @COOLSerdash's Comment, I'll show you QQ-plots of data sampled from a couple of distributions that have heavier tails than a normal distribution. set.seed(2020) v = rt(150, 3) # Student's t, DF = 3 plot(qqnorm(v)) points(qqline(v)) w = rexp(500)-rexp(500) # difference of exponentials is Laplace plot(qqnorm(w)) points(qqline(w))
How to interpret this shape of QQ plot of standardized residuals? A comment with QQ-plots of data from $\mathsf{T}(3)$ and $\mathsf{Laplace}(0,1)$ (Wikipedia) distributions, both with heavy tails. Following up on @COOLSerdash's Comment, I'll show you QQ-plots of dat
23,959
How to interpret this shape of QQ plot of standardized residuals?
You should also draw a line using a qqline(), anyway, it will be always a straight line thus in your example it means that the distribution has a heavier tail in compare to normal distribution. You should consider a refit your model. However, if the effect is a strong and you fit the model to the big dataset you can also consider to left it, read more about this option here: https://www.biorxiv.org/content/10.1101/498931v1.abstract
How to interpret this shape of QQ plot of standardized residuals?
You should also draw a line using a qqline(), anyway, it will be always a straight line thus in your example it means that the distribution has a heavier tail in compare to normal distribution. You sh
How to interpret this shape of QQ plot of standardized residuals? You should also draw a line using a qqline(), anyway, it will be always a straight line thus in your example it means that the distribution has a heavier tail in compare to normal distribution. You should consider a refit your model. However, if the effect is a strong and you fit the model to the big dataset you can also consider to left it, read more about this option here: https://www.biorxiv.org/content/10.1101/498931v1.abstract
How to interpret this shape of QQ plot of standardized residuals? You should also draw a line using a qqline(), anyway, it will be always a straight line thus in your example it means that the distribution has a heavier tail in compare to normal distribution. You sh
23,960
Central Limit Theorem - Rule of thumb for repeated sampling
To facilitate accurate discussion of this issue, I am going to give a mathematical account of what you are doing. Suppose you have an infinite matrix $\mathbf{X} \equiv [X_{i,j} | i \in \mathbb{Z}, j \in \mathbb{Z} ]$ composed of IID random variables from some distribution with mean $\mu$ and finite variance $\sigma^2$ that is not a normal distribution:$^\dagger$ $$X_{i,j} \sim \text{IID Dist}(\mu, \sigma^2)$$ In your analysis you are forming repeated independent iterations of sample means based on a fixed sample size. If you use a sample size of $n$ and take $M$ iterations then you are forming the statistics $\bar{X}_n^{(1)},...,\bar{X}_n^{(M)}$ given by: $$\bar{X}_n^{(m)} \equiv \frac{1}{n} \sum_{i=1}^n X_{i,m} \quad \quad \quad \text{for } m = 1,...,M.$$ In your output you show histograms of the outcomes $\bar{X}_n^{(1)},...,\bar{X}_n^{(M)}$ for different values of $n$. It is clear that as $n$ gets bigger, we get closer to the normal distribution. Now, in terms of "convergence to the normal distribution" there are two issues here. The central limit theorem says that the true distribution of the sample mean will converge towards the normal distribution as $n \rightarrow \infty$ (when appropriately standardised). The law of large numbers says that your histograms will converge towards the true underlying distribution of the sample mean as $M \rightarrow \infty$. So, in those histograms we have two sources of "error" relative to a perfect normal distribution. For smaller $n$ the true distribution of the sample mean is further away from the normal distribution, and for smaller $M$ the histogram is further away from the true distribution (i.e., contains more random error). How big does $n$ need to be? The various "rules of thumb" for the requisite size of $n$ are not particularly useful in my view. It is true that some textbooks propagate the notion that $n=30$ is sufficient to ensure that the sample mean is well approximated by the normal distribution. The truth is that the "required sample size" for good approximation by the normal distribution is not a fixed quantity --- it depends on two factors: the degree to which the underlying distribution departs from the normal distribution; and the required level of accuracy needed for the approximation. The only real way to determine the appropriate sample size required for an "accurate" approximation by the normal distribution is to have a look at the convergence for a range of underlying distributions. The kinds of simulations you are doing are a good way to get a sense of this. How big does $M$ need to be? There are some useful mathematical results showing the rate of convergence of an empirical distribution to the true underlying distribution for IID data. To give a brief account of this, let us suppose that $F_n$ is the true distribution function for the sample mean with $n$ values, and define the empirical distribution of the simulated sample means as: $$\hat{F}_n (x) \equiv \frac{1}{M} \sum_{m=1}^M \mathbb{I}(\bar{X}_n^{(m)} \leqslant x) \quad \quad \quad \text{for } x \in \mathbb{R}.$$ It is trivial to show that $M \hat{F}_n(x) \sim \text{Bin}(M, F_n(x))$, so the "error" between the true distribution and the empirical distribution at any point $x \in \mathbb{R}$ has zero mean, and has variance: $$\mathbb{V} (\hat{F}_n(x) - F_n(x)) = \frac{F_n(x) (1-F_n(x))}{M}.$$ It is fairly simple to use standard confidence interval results for the binomial distribution to get an appropriate confidence intervale for the error in the simulated estimation of the distribution of the sample mean. $^\dagger$ Of course, it is possible to use a normal distribution, but that is not very interesting because convergence to normality is already achieved with a sample size of one.
Central Limit Theorem - Rule of thumb for repeated sampling
To facilitate accurate discussion of this issue, I am going to give a mathematical account of what you are doing. Suppose you have an infinite matrix $\mathbf{X} \equiv [X_{i,j} | i \in \mathbb{Z}, j
Central Limit Theorem - Rule of thumb for repeated sampling To facilitate accurate discussion of this issue, I am going to give a mathematical account of what you are doing. Suppose you have an infinite matrix $\mathbf{X} \equiv [X_{i,j} | i \in \mathbb{Z}, j \in \mathbb{Z} ]$ composed of IID random variables from some distribution with mean $\mu$ and finite variance $\sigma^2$ that is not a normal distribution:$^\dagger$ $$X_{i,j} \sim \text{IID Dist}(\mu, \sigma^2)$$ In your analysis you are forming repeated independent iterations of sample means based on a fixed sample size. If you use a sample size of $n$ and take $M$ iterations then you are forming the statistics $\bar{X}_n^{(1)},...,\bar{X}_n^{(M)}$ given by: $$\bar{X}_n^{(m)} \equiv \frac{1}{n} \sum_{i=1}^n X_{i,m} \quad \quad \quad \text{for } m = 1,...,M.$$ In your output you show histograms of the outcomes $\bar{X}_n^{(1)},...,\bar{X}_n^{(M)}$ for different values of $n$. It is clear that as $n$ gets bigger, we get closer to the normal distribution. Now, in terms of "convergence to the normal distribution" there are two issues here. The central limit theorem says that the true distribution of the sample mean will converge towards the normal distribution as $n \rightarrow \infty$ (when appropriately standardised). The law of large numbers says that your histograms will converge towards the true underlying distribution of the sample mean as $M \rightarrow \infty$. So, in those histograms we have two sources of "error" relative to a perfect normal distribution. For smaller $n$ the true distribution of the sample mean is further away from the normal distribution, and for smaller $M$ the histogram is further away from the true distribution (i.e., contains more random error). How big does $n$ need to be? The various "rules of thumb" for the requisite size of $n$ are not particularly useful in my view. It is true that some textbooks propagate the notion that $n=30$ is sufficient to ensure that the sample mean is well approximated by the normal distribution. The truth is that the "required sample size" for good approximation by the normal distribution is not a fixed quantity --- it depends on two factors: the degree to which the underlying distribution departs from the normal distribution; and the required level of accuracy needed for the approximation. The only real way to determine the appropriate sample size required for an "accurate" approximation by the normal distribution is to have a look at the convergence for a range of underlying distributions. The kinds of simulations you are doing are a good way to get a sense of this. How big does $M$ need to be? There are some useful mathematical results showing the rate of convergence of an empirical distribution to the true underlying distribution for IID data. To give a brief account of this, let us suppose that $F_n$ is the true distribution function for the sample mean with $n$ values, and define the empirical distribution of the simulated sample means as: $$\hat{F}_n (x) \equiv \frac{1}{M} \sum_{m=1}^M \mathbb{I}(\bar{X}_n^{(m)} \leqslant x) \quad \quad \quad \text{for } x \in \mathbb{R}.$$ It is trivial to show that $M \hat{F}_n(x) \sim \text{Bin}(M, F_n(x))$, so the "error" between the true distribution and the empirical distribution at any point $x \in \mathbb{R}$ has zero mean, and has variance: $$\mathbb{V} (\hat{F}_n(x) - F_n(x)) = \frac{F_n(x) (1-F_n(x))}{M}.$$ It is fairly simple to use standard confidence interval results for the binomial distribution to get an appropriate confidence intervale for the error in the simulated estimation of the distribution of the sample mean. $^\dagger$ Of course, it is possible to use a normal distribution, but that is not very interesting because convergence to normality is already achieved with a sample size of one.
Central Limit Theorem - Rule of thumb for repeated sampling To facilitate accurate discussion of this issue, I am going to give a mathematical account of what you are doing. Suppose you have an infinite matrix $\mathbf{X} \equiv [X_{i,j} | i \in \mathbb{Z}, j
23,961
Central Limit Theorem - Rule of thumb for repeated sampling
I think it may be helpful to think about your question a bit differently. Suppose that $X\sim F_X$ where $F_X$ is any arbitrary distribution, and let $\sigma^2 = Var(X)$. Now suppose I draw iid $X_1,\dots,X_n \sim F_X$, and let $\bar{X}_n = \frac{1}{n}\sum X_i$. The CLT says that under very weak assumptions, $\bar{X}_n \xrightarrow{d} N(\mu,\sigma^2/n)$ as $n$ gets arbitrarily large. Now suppose that for a fixed $n$, I observe $\bar{X}_{n1},\dots,\bar{X}_{nK}$ where for each $k$, I sample iid $X_{1k},\dots,X_{nk} \sim F_X$ and build $\bar{X}_{nk}$. But this is the exact same as sampling $\bar{X}_{ni}$ from the distribution $F_{\bar{X}_n}$. Your question can thus posed as follows: What is the distribution $F_{\bar{X}_n}$, and in particular, is it normal? The answer is no, and I'll focus on your exponential example. We can understand this problem by literally considering the sampling distribution of $\bar{X}_n$ given iid $X_1,\dots,X_n \sim Exp(\gamma)$. Note that $Exp(\gamma) = \text{Gamma}(\alpha=1,\gamma)$, and so $\sum X_i \sim \text{Gamma}(n,\gamma)$ and thus $$\frac{1}{n}\sum X_i \sim \text{Gamma}(n,\gamma/n)$$ As it turns out, for $n$ reasonably large, this distribution is very similar to a Normal distribution, but it will never be a normal distribution for any finite $n$ (the above is exactly what distribution it is!). What you did by replicating was simply drawing from this distribution and plotting (indeed, try plotting these and youll get the same result!). Depending on the distribution of $X_i$, the distribution of $\bar{X}_n$ can be anything. What the CLT says is that as $n$ goes to infinity, $\bar{X}_n$ will converge to a normal distribution, and similarly, $\text{Gamma}(n,\gamma/n)$ (or any $F_{\bar{X}_n}$ where $X$ satisfies the requisite requirements for CLT to kick in) will asymptotically equal a normal distribution. EDIT In response to your comments, maybe there's a misunderstand somewhere. It's helpful to emphasize that we can think of $\bar{X}_n$ as a random variable itself (often we think of it as the mean and thus a constant, but this is not true!). The point is that the random variable $\bar{X}_n$ that is the sample mean of $X_1,\dots,X_n \sim F_X$, and the random variable $Y \sim F_{\bar{X}_n}$ are the exact same random variable. So by drawing $K$ iid draws of $X_1,\dots,X_n \sim F_X$ and calculating $\bar{X}_n$, you're doing the equivalent of $K$ draws from $F_{\bar{X}_n}$. At the end of the day, regardless of whether $K = 100,1000,100000,\dots$, youre just drawing $K$ times from $F_{\bar{X}_n}$. So what is your goal here? Are you asking at what point does the empirical cdf of $K$ draws accurately represent the cdf of $F_{\bar{X}_N}$? Well forget about anything about sample means in that case, and simply ask how many times do I need to draw some random variable $W \sim F$ such that the empirical cdf $\hat{F}_n$ is 'approximately' $F$. Well there's a whole literature on that, and two basic results are (see the wiki link on empirical cdfs for more): By the Glivenko-Cantelli theorem, $\hat{F}_n$ uniformly converges to $F$ almost surely. By Donsker's theorem, The empirical process $\sqrt{n}(\hat{F}_n -F)$ converges in distribution to a mean-zero Gaussian process. What you are doing with your histograms in your post is really estimating the density (not the CDF) given $K$ draws. Histograms are a (discrete) example of kernel density estimation (KDE). There's a similar literature on KDEs, and again, you have properties like the sample KDE will converge to the true underlying density as you gather more draws (ie $K\to\infty$). It should be noted that histograms don't converge to the true density unless you also let the bin width go to zero, and this is one reason why kernel approaches are preferred: they allow smoothness and similar properties. But at the end of the day, what you can say is the following: For a fixed $n$, drawing iid $X_1,\dots,X_n$ and considering the random variable $\frac{1}{n}\sum_{X_i}$ is equivalent to considering the random variable with distribution $F_{\bar{X}_n}$. For any $K$ draws from $F_{\bar{X}_n}$, you can estimate the CDF (empirical CDF) and/or estimate the density (two approach are histogram or KDE). In either case, as $K\to\infty$, these two esimates will converge to the true CDF/density of the random variable $\bar{X}_n$, but these will never be the normal CDF/desntiy for any fixed $n$. However, as you let $n\to\infty$, $\bar{X}_n$ is asymptotically normal (under suitable conditions), and similarly, the CDF/density will also become normal. If you take $n\to\infty$, and then $K\to\infty$, then you will get the cdf/density of a normal rv.
Central Limit Theorem - Rule of thumb for repeated sampling
I think it may be helpful to think about your question a bit differently. Suppose that $X\sim F_X$ where $F_X$ is any arbitrary distribution, and let $\sigma^2 = Var(X)$. Now suppose I draw iid $X_1,\
Central Limit Theorem - Rule of thumb for repeated sampling I think it may be helpful to think about your question a bit differently. Suppose that $X\sim F_X$ where $F_X$ is any arbitrary distribution, and let $\sigma^2 = Var(X)$. Now suppose I draw iid $X_1,\dots,X_n \sim F_X$, and let $\bar{X}_n = \frac{1}{n}\sum X_i$. The CLT says that under very weak assumptions, $\bar{X}_n \xrightarrow{d} N(\mu,\sigma^2/n)$ as $n$ gets arbitrarily large. Now suppose that for a fixed $n$, I observe $\bar{X}_{n1},\dots,\bar{X}_{nK}$ where for each $k$, I sample iid $X_{1k},\dots,X_{nk} \sim F_X$ and build $\bar{X}_{nk}$. But this is the exact same as sampling $\bar{X}_{ni}$ from the distribution $F_{\bar{X}_n}$. Your question can thus posed as follows: What is the distribution $F_{\bar{X}_n}$, and in particular, is it normal? The answer is no, and I'll focus on your exponential example. We can understand this problem by literally considering the sampling distribution of $\bar{X}_n$ given iid $X_1,\dots,X_n \sim Exp(\gamma)$. Note that $Exp(\gamma) = \text{Gamma}(\alpha=1,\gamma)$, and so $\sum X_i \sim \text{Gamma}(n,\gamma)$ and thus $$\frac{1}{n}\sum X_i \sim \text{Gamma}(n,\gamma/n)$$ As it turns out, for $n$ reasonably large, this distribution is very similar to a Normal distribution, but it will never be a normal distribution for any finite $n$ (the above is exactly what distribution it is!). What you did by replicating was simply drawing from this distribution and plotting (indeed, try plotting these and youll get the same result!). Depending on the distribution of $X_i$, the distribution of $\bar{X}_n$ can be anything. What the CLT says is that as $n$ goes to infinity, $\bar{X}_n$ will converge to a normal distribution, and similarly, $\text{Gamma}(n,\gamma/n)$ (or any $F_{\bar{X}_n}$ where $X$ satisfies the requisite requirements for CLT to kick in) will asymptotically equal a normal distribution. EDIT In response to your comments, maybe there's a misunderstand somewhere. It's helpful to emphasize that we can think of $\bar{X}_n$ as a random variable itself (often we think of it as the mean and thus a constant, but this is not true!). The point is that the random variable $\bar{X}_n$ that is the sample mean of $X_1,\dots,X_n \sim F_X$, and the random variable $Y \sim F_{\bar{X}_n}$ are the exact same random variable. So by drawing $K$ iid draws of $X_1,\dots,X_n \sim F_X$ and calculating $\bar{X}_n$, you're doing the equivalent of $K$ draws from $F_{\bar{X}_n}$. At the end of the day, regardless of whether $K = 100,1000,100000,\dots$, youre just drawing $K$ times from $F_{\bar{X}_n}$. So what is your goal here? Are you asking at what point does the empirical cdf of $K$ draws accurately represent the cdf of $F_{\bar{X}_N}$? Well forget about anything about sample means in that case, and simply ask how many times do I need to draw some random variable $W \sim F$ such that the empirical cdf $\hat{F}_n$ is 'approximately' $F$. Well there's a whole literature on that, and two basic results are (see the wiki link on empirical cdfs for more): By the Glivenko-Cantelli theorem, $\hat{F}_n$ uniformly converges to $F$ almost surely. By Donsker's theorem, The empirical process $\sqrt{n}(\hat{F}_n -F)$ converges in distribution to a mean-zero Gaussian process. What you are doing with your histograms in your post is really estimating the density (not the CDF) given $K$ draws. Histograms are a (discrete) example of kernel density estimation (KDE). There's a similar literature on KDEs, and again, you have properties like the sample KDE will converge to the true underlying density as you gather more draws (ie $K\to\infty$). It should be noted that histograms don't converge to the true density unless you also let the bin width go to zero, and this is one reason why kernel approaches are preferred: they allow smoothness and similar properties. But at the end of the day, what you can say is the following: For a fixed $n$, drawing iid $X_1,\dots,X_n$ and considering the random variable $\frac{1}{n}\sum_{X_i}$ is equivalent to considering the random variable with distribution $F_{\bar{X}_n}$. For any $K$ draws from $F_{\bar{X}_n}$, you can estimate the CDF (empirical CDF) and/or estimate the density (two approach are histogram or KDE). In either case, as $K\to\infty$, these two esimates will converge to the true CDF/density of the random variable $\bar{X}_n$, but these will never be the normal CDF/desntiy for any fixed $n$. However, as you let $n\to\infty$, $\bar{X}_n$ is asymptotically normal (under suitable conditions), and similarly, the CDF/density will also become normal. If you take $n\to\infty$, and then $K\to\infty$, then you will get the cdf/density of a normal rv.
Central Limit Theorem - Rule of thumb for repeated sampling I think it may be helpful to think about your question a bit differently. Suppose that $X\sim F_X$ where $F_X$ is any arbitrary distribution, and let $\sigma^2 = Var(X)$. Now suppose I draw iid $X_1,\
23,962
What is the likelihood for this process?
In this case, I believe a path to a solution exists if we put on our survival analysis hat. Note that even though this model has no censored subjects (in the traditional sense), we can still use survival analysis and talk about hazards of subjects. We need to model three things in this order: i) the cumulative hazard, ii) the hazard, iii) the log likelihood. i) We'll do part i) in steps. What is the cumulative hazard, $H(t)$, of a Poisson random variable? For a discrete distribution, there are two ways to define it¹, but we will use the definition $H(t) = -\log{S(t)}$. So the cumulative hazard for $T \sim Poi(\lambda)$ is $$ H_T(t) = -\log{(1 - Q(t, \lambda))} = -\log{P(t, \lambda)} $$ where $Q, P$ is the upper, lower regularized gamma function respectively. Now we want to add the "hazards" of the insurance running out. The nice thing about cumulative hazards is that they are additive, so we simply need to add "risks" at the times 7, 14, 21: $$ H_{T'}(t) = -\log{P(t, \lambda)} + a\cdot\mathbb{1}_{(t>7)} + b\cdot\mathbb{1}_{(t>14)} + c\cdot\mathbb{1}_{(t>21)} $$ Heuristically, a patient is subject to a background "Poisson" risk, and then point-wise risks at 7, 14, and 21. (Because this is a cumulative hazard, we accumulate those point-wise risks, hence the $>$.) We don't know what $a, b$, and $c$ are, but we will later connect them to our probabilities of insurance running out. Actually, since we know 21 is the upper limit and all patients are removed after that, we can set $c$ to be infinity. $$ H_{T'}(t) = -\log{P(t, \lambda)} + a\cdot\mathbb{1}_{(t>7)} + b\cdot\mathbb{1}_{(t>14)} + \infty \cdot\mathbb{1}_{(t>21)} $$ ii) Next we use the cumulative hazard to get the hazard, $h(t)$. The formula for this is: $$h(t) = 1 - \exp{(H(t) - H(t+1))}$$ Plugging in our cumulative hazard, and simplifying: $$h_{T'}(t) = 1 - \frac{P(t+1, \lambda)}{P(t, \lambda)} \exp(-a\cdot\mathbb{1}_{(t=7)} - b\cdot\mathbb{1}_{(t=14)} - \infty \cdot\mathbb{1}_{(t=21)})$$ iii) Finally, writing the log likelihood for survival models (without censoring) is super easy once we have the hazard and cumulative hazard: $$ll(\lambda, a, b \;|\; t) = \sum_{i=1}^N \left(\log h(t_i) - H(t_i)\right)$$ And there it is! There exists the relationships that connects our point-wise hazard coefficients and the probabilities of insurance lengths: $a = -\log(1 - p_a), b = -\log(1 - p_a - p_b) - \log(1 - p_a), p_c = 1 - (p_a + p_b)$. The proof is in the pudding. Let's do some simulations and inference using lifelines' custom model semantics. from lifelines.fitters import ParametericUnivariateFitter from autograd_gamma import gammaincln, gammainc from autograd import numpy as np MAX = 1e10 class InsuranceDischargeModel(ParametericUnivariateFitter): """ parameters are related by a = -log(1 - p_a) b = -log(1 - p_a - p_b) - log(1 - p_a) p_c = 1 - (p_a + p_b) """ _fitted_parameter_names = ["lbd", "a", "b"] _bounds = [(0, None), (0, None), (0, None)] def _hazard(self, params, t): # from (1.64c) in http://geb.uni-giessen.de/geb/volltexte/2014/10793/pdf/RinneHorst_hazardrate_2014.pdf return 1 - np.exp(self._cumulative_hazard(params, t) - self._cumulative_hazard(params, t+1)) def _cumulative_hazard(self, params, t): lbd, a, b = params return -gammaincln(t, lbd) + a * (t > 7) + b * (t > 14) + MAX * (t > 21) def gen_data(): p_a, p_b = 0.4, 0.2 p = [p_a, p_b, 1 - p_a - p_b] lambda_ = 18 death_without_insurance = np.random.poisson(lambda_) insurance_covers_until = np.random.choice([7, 14, 21], p=p) if death_without_insurance < insurance_covers_until: return death_without_insurance else: return insurance_covers_until durations = np.array([gen_data() for _ in range(40000)]) model = InsuranceDischargeModel() model.fit(durations) model.print_summary(5) """ <lifelines.InsuranceDischargeModel:"InsuranceDischargeModel_estimate", fitted with 40000 total observations, 0 right-censored observations> number of observations = 40000 number of events observed = 40000 log-likelihood = -78754.92088 hypothesis = lbd != 1, a != 1, b != 1 --- coef se(coef) coef lower 95% coef upper 95% z p -log2(p) lbd 18.01220 0.03351 17.94652 18.07789 507.62368 <5e-06 inf a 0.51426 0.00411 0.50620 0.52232 -118.14024 <5e-06 inf b 0.40674 0.00557 0.39582 0.41767 -106.43953 <5e-06 inf --- """ ¹ see Section 1.2 here
What is the likelihood for this process?
In this case, I believe a path to a solution exists if we put on our survival analysis hat. Note that even though this model has no censored subjects (in the traditional sense), we can still use survi
What is the likelihood for this process? In this case, I believe a path to a solution exists if we put on our survival analysis hat. Note that even though this model has no censored subjects (in the traditional sense), we can still use survival analysis and talk about hazards of subjects. We need to model three things in this order: i) the cumulative hazard, ii) the hazard, iii) the log likelihood. i) We'll do part i) in steps. What is the cumulative hazard, $H(t)$, of a Poisson random variable? For a discrete distribution, there are two ways to define it¹, but we will use the definition $H(t) = -\log{S(t)}$. So the cumulative hazard for $T \sim Poi(\lambda)$ is $$ H_T(t) = -\log{(1 - Q(t, \lambda))} = -\log{P(t, \lambda)} $$ where $Q, P$ is the upper, lower regularized gamma function respectively. Now we want to add the "hazards" of the insurance running out. The nice thing about cumulative hazards is that they are additive, so we simply need to add "risks" at the times 7, 14, 21: $$ H_{T'}(t) = -\log{P(t, \lambda)} + a\cdot\mathbb{1}_{(t>7)} + b\cdot\mathbb{1}_{(t>14)} + c\cdot\mathbb{1}_{(t>21)} $$ Heuristically, a patient is subject to a background "Poisson" risk, and then point-wise risks at 7, 14, and 21. (Because this is a cumulative hazard, we accumulate those point-wise risks, hence the $>$.) We don't know what $a, b$, and $c$ are, but we will later connect them to our probabilities of insurance running out. Actually, since we know 21 is the upper limit and all patients are removed after that, we can set $c$ to be infinity. $$ H_{T'}(t) = -\log{P(t, \lambda)} + a\cdot\mathbb{1}_{(t>7)} + b\cdot\mathbb{1}_{(t>14)} + \infty \cdot\mathbb{1}_{(t>21)} $$ ii) Next we use the cumulative hazard to get the hazard, $h(t)$. The formula for this is: $$h(t) = 1 - \exp{(H(t) - H(t+1))}$$ Plugging in our cumulative hazard, and simplifying: $$h_{T'}(t) = 1 - \frac{P(t+1, \lambda)}{P(t, \lambda)} \exp(-a\cdot\mathbb{1}_{(t=7)} - b\cdot\mathbb{1}_{(t=14)} - \infty \cdot\mathbb{1}_{(t=21)})$$ iii) Finally, writing the log likelihood for survival models (without censoring) is super easy once we have the hazard and cumulative hazard: $$ll(\lambda, a, b \;|\; t) = \sum_{i=1}^N \left(\log h(t_i) - H(t_i)\right)$$ And there it is! There exists the relationships that connects our point-wise hazard coefficients and the probabilities of insurance lengths: $a = -\log(1 - p_a), b = -\log(1 - p_a - p_b) - \log(1 - p_a), p_c = 1 - (p_a + p_b)$. The proof is in the pudding. Let's do some simulations and inference using lifelines' custom model semantics. from lifelines.fitters import ParametericUnivariateFitter from autograd_gamma import gammaincln, gammainc from autograd import numpy as np MAX = 1e10 class InsuranceDischargeModel(ParametericUnivariateFitter): """ parameters are related by a = -log(1 - p_a) b = -log(1 - p_a - p_b) - log(1 - p_a) p_c = 1 - (p_a + p_b) """ _fitted_parameter_names = ["lbd", "a", "b"] _bounds = [(0, None), (0, None), (0, None)] def _hazard(self, params, t): # from (1.64c) in http://geb.uni-giessen.de/geb/volltexte/2014/10793/pdf/RinneHorst_hazardrate_2014.pdf return 1 - np.exp(self._cumulative_hazard(params, t) - self._cumulative_hazard(params, t+1)) def _cumulative_hazard(self, params, t): lbd, a, b = params return -gammaincln(t, lbd) + a * (t > 7) + b * (t > 14) + MAX * (t > 21) def gen_data(): p_a, p_b = 0.4, 0.2 p = [p_a, p_b, 1 - p_a - p_b] lambda_ = 18 death_without_insurance = np.random.poisson(lambda_) insurance_covers_until = np.random.choice([7, 14, 21], p=p) if death_without_insurance < insurance_covers_until: return death_without_insurance else: return insurance_covers_until durations = np.array([gen_data() for _ in range(40000)]) model = InsuranceDischargeModel() model.fit(durations) model.print_summary(5) """ <lifelines.InsuranceDischargeModel:"InsuranceDischargeModel_estimate", fitted with 40000 total observations, 0 right-censored observations> number of observations = 40000 number of events observed = 40000 log-likelihood = -78754.92088 hypothesis = lbd != 1, a != 1, b != 1 --- coef se(coef) coef lower 95% coef upper 95% z p -log2(p) lbd 18.01220 0.03351 17.94652 18.07789 507.62368 <5e-06 inf a 0.51426 0.00411 0.50620 0.52232 -118.14024 <5e-06 inf b 0.40674 0.00557 0.39582 0.41767 -106.43953 <5e-06 inf --- """ ¹ see Section 1.2 here
What is the likelihood for this process? In this case, I believe a path to a solution exists if we put on our survival analysis hat. Note that even though this model has no censored subjects (in the traditional sense), we can still use survi
23,963
Why aren't "error in X" models more widely used?
Your question (plus further commentary in the comments) appears to be mostly interested in the case where we have a randomised controlled trial where the researcher randomly assigns one or more of the explanatory variables, based on some randomisation design. In this context, you want to know why we use a model that treats the explanatory variables as known constants, rather than treating them as random variables from the sampling distribution imposed by the randomisation. (Your question is broader than this, but this seems to be the case of primary interest in the commentary, so this is the one I will address.) The reason that we condition on the explanatory variables, in this context, is that in a regression problem for an RCT, we are still interested in the conditional distribution of the response variable given the predictors. Indeed, in an RCT we are interested in determining the causal effects of an explanatory variable $X$ on the response variable $Y$, which we are going to determine via inference about the conditional distribution (subject to some protocols to prevent confounding). The randomisation is imposed to break dependence between the explanatory variable $X$ and any would-be confounding variables (i.e., prevent back-door associations).$^\dagger$ However, the object of inference in the problem is still the conditional distribution of the response variable given the explanatory variables. Thus, it still makes sense to estimate the parameters in this conditional distribution, using estimation methods that have good properties for inferring the conditional distribution. That is the normal case that applies for an RCT using regression techniques. Of course, there are some situations where we have other interests, and we might indeed want to incorporate uncertainty about the explanatory variables. Incorporating uncertainty in the explanatory variables generally occurs in two cases: (1) When we go beyond regression analysis and into multivariate analysis we are then interested is in the joint distribution of the explanatory and response variables, rather than just the conditional distribution of the latter given the former. There may be applications where this is our interest, and so we would then go beyond regression analysis, and incorporate information about the distribution of the explanatory variables. (2) In some regression applications our interest is in the conditional distribution of the response variable conditional on an underlying unobserved explanatory variable, where we assume that the observed explanatory variables was subject to error ("errors-in-variables"). In this case we incorporate uncertainty via "errors-in-variables". The reason for this is that our interest in these cases is in the conditional distribution, conditional on an unobserved underlying variable. Note that both of these cases are mathematically more complicated than regression analysis, so if we can get away with using regression analysis, that is generally preferable. In any case, in most applications of regression analysis, the goal is to make an inference about the conditional distribution of the response, given the observable explanatory variables, so these generalisations become unnecessary. $^\dagger$ Note that randomisation severs causal effects from confounding variables to the randomised variable, but it does not sever causal effects from the randomised variable to the confounding variables, and then to the response. This means that other protocols (e.g., placebos, blinding, etc.) may be required to fully sever all back-door associations in a causal analysis.
Why aren't "error in X" models more widely used?
Your question (plus further commentary in the comments) appears to be mostly interested in the case where we have a randomised controlled trial where the researcher randomly assigns one or more of the
Why aren't "error in X" models more widely used? Your question (plus further commentary in the comments) appears to be mostly interested in the case where we have a randomised controlled trial where the researcher randomly assigns one or more of the explanatory variables, based on some randomisation design. In this context, you want to know why we use a model that treats the explanatory variables as known constants, rather than treating them as random variables from the sampling distribution imposed by the randomisation. (Your question is broader than this, but this seems to be the case of primary interest in the commentary, so this is the one I will address.) The reason that we condition on the explanatory variables, in this context, is that in a regression problem for an RCT, we are still interested in the conditional distribution of the response variable given the predictors. Indeed, in an RCT we are interested in determining the causal effects of an explanatory variable $X$ on the response variable $Y$, which we are going to determine via inference about the conditional distribution (subject to some protocols to prevent confounding). The randomisation is imposed to break dependence between the explanatory variable $X$ and any would-be confounding variables (i.e., prevent back-door associations).$^\dagger$ However, the object of inference in the problem is still the conditional distribution of the response variable given the explanatory variables. Thus, it still makes sense to estimate the parameters in this conditional distribution, using estimation methods that have good properties for inferring the conditional distribution. That is the normal case that applies for an RCT using regression techniques. Of course, there are some situations where we have other interests, and we might indeed want to incorporate uncertainty about the explanatory variables. Incorporating uncertainty in the explanatory variables generally occurs in two cases: (1) When we go beyond regression analysis and into multivariate analysis we are then interested is in the joint distribution of the explanatory and response variables, rather than just the conditional distribution of the latter given the former. There may be applications where this is our interest, and so we would then go beyond regression analysis, and incorporate information about the distribution of the explanatory variables. (2) In some regression applications our interest is in the conditional distribution of the response variable conditional on an underlying unobserved explanatory variable, where we assume that the observed explanatory variables was subject to error ("errors-in-variables"). In this case we incorporate uncertainty via "errors-in-variables". The reason for this is that our interest in these cases is in the conditional distribution, conditional on an unobserved underlying variable. Note that both of these cases are mathematically more complicated than regression analysis, so if we can get away with using regression analysis, that is generally preferable. In any case, in most applications of regression analysis, the goal is to make an inference about the conditional distribution of the response, given the observable explanatory variables, so these generalisations become unnecessary. $^\dagger$ Note that randomisation severs causal effects from confounding variables to the randomised variable, but it does not sever causal effects from the randomised variable to the confounding variables, and then to the response. This means that other protocols (e.g., placebos, blinding, etc.) may be required to fully sever all back-door associations in a causal analysis.
Why aren't "error in X" models more widely used? Your question (plus further commentary in the comments) appears to be mostly interested in the case where we have a randomised controlled trial where the researcher randomly assigns one or more of the
23,964
Why aren't "error in X" models more widely used?
The title "errors in variables" and the content of the question seems different, as it asks about why we do not take into account the variation in $X$ when modelling the conditional response, that is, in inference for regression parameters. Those two preoccupations seems orthogonal to me, so here I respond to the content. I have answered to a similar question before, What is the difference between conditioning on regressors vs. treating them as fixed?, so here I will copy part of my answer there: I will try to flesh out an argument for conditioning on regressors somewhat more formally. Let $(Y,X)$ be a random vector, and interest is in regression $Y$ on $X$, where regression is taken to mean the conditional expectation of $Y$ on $X$. Under multinormal assumptions that will be a linear function, but our arguments do not depend on that. We start with factoring the joint density in the usual way $$ f(y,x) = f(y\mid x) f(x) $$ but those functions are not known so we use a parameterized model $$ f(y,x; \theta, \psi)=f_\theta(y \mid x) f_\psi(x) $$ where $\theta$ parameterizes the conditional distribution and $\psi$ the marginal distribution of $X$. In the normal linear model we can have $\theta=(\beta, \sigma^2)$ but that is not assumed. The full parameter space of $(\theta,\psi)$ is $\Theta \times \Psi$, a Cartesian product, and the two parameters have no part in common. This can be interpreted as a factorization of the statistical experiment, (or of the data generation process, DGP), first $X$ is generated according to $f_\psi(x)$, and as a second step, $Y$ is generated according to the conditional density $f_\theta(y \mid X=x)$. Note that the first step does not use any knowledge about $\theta$, that enters only in the second step. The statistic $X$ is ancillary for $\theta$, see https://en.wikipedia.org/wiki/Ancillary_statistic. But, depending on the results of the first step, the second step could be more or less informative about $\theta$. If the distribution given by $f_\psi(x)$ have very low variance, say, the observed $x$'s will be concentrated in a small region, so it will be more difficult to estimate $\theta$. So, the first part of this two-step experiment determines the precision with which $\theta$ can be estimated. Therefore it is natural to condition on $X=x$ in inference about the regression parameters. That is the conditionality argument, and the outline above makes clear its assumptions. In designed experiments its assumption will mostly hold, often with observational data not. Some examples of problems will be: regression with lagged responses as predictors. Conditioning on the predictors in this case will also condition on the response! (I will add more examples). One book which discusses this problems in a lot of detail is Information and exponential families: In statistical theory by O. E Barndorff-Nielsen. See especially chapter 4. The author says the separation logic in this situation is however seldom explicated but gives the following references: R A Fisher (1956) Statistical Methods and Scientific Inference $\S 4.3$ and Sverdrup (1966) The present state of the decision theory and the Neyman-Pearson theory. The factorization used here is somewhat similar in spirit to the factorization theorem of sufficient statistics. If focus is on the regression parameters $\theta$, and the distribution of $X$ do not depend on $\theta$, then how could the distribution of (or variation in) $X$ contain information about $\theta$? This separation argument is helpful also because it points to the cases where it cannot be used, for instance regression with lagged responses as predictors.
Why aren't "error in X" models more widely used?
The title "errors in variables" and the content of the question seems different, as it asks about why we do not take into account the variation in $X$ when modelling the conditional response, that is,
Why aren't "error in X" models more widely used? The title "errors in variables" and the content of the question seems different, as it asks about why we do not take into account the variation in $X$ when modelling the conditional response, that is, in inference for regression parameters. Those two preoccupations seems orthogonal to me, so here I respond to the content. I have answered to a similar question before, What is the difference between conditioning on regressors vs. treating them as fixed?, so here I will copy part of my answer there: I will try to flesh out an argument for conditioning on regressors somewhat more formally. Let $(Y,X)$ be a random vector, and interest is in regression $Y$ on $X$, where regression is taken to mean the conditional expectation of $Y$ on $X$. Under multinormal assumptions that will be a linear function, but our arguments do not depend on that. We start with factoring the joint density in the usual way $$ f(y,x) = f(y\mid x) f(x) $$ but those functions are not known so we use a parameterized model $$ f(y,x; \theta, \psi)=f_\theta(y \mid x) f_\psi(x) $$ where $\theta$ parameterizes the conditional distribution and $\psi$ the marginal distribution of $X$. In the normal linear model we can have $\theta=(\beta, \sigma^2)$ but that is not assumed. The full parameter space of $(\theta,\psi)$ is $\Theta \times \Psi$, a Cartesian product, and the two parameters have no part in common. This can be interpreted as a factorization of the statistical experiment, (or of the data generation process, DGP), first $X$ is generated according to $f_\psi(x)$, and as a second step, $Y$ is generated according to the conditional density $f_\theta(y \mid X=x)$. Note that the first step does not use any knowledge about $\theta$, that enters only in the second step. The statistic $X$ is ancillary for $\theta$, see https://en.wikipedia.org/wiki/Ancillary_statistic. But, depending on the results of the first step, the second step could be more or less informative about $\theta$. If the distribution given by $f_\psi(x)$ have very low variance, say, the observed $x$'s will be concentrated in a small region, so it will be more difficult to estimate $\theta$. So, the first part of this two-step experiment determines the precision with which $\theta$ can be estimated. Therefore it is natural to condition on $X=x$ in inference about the regression parameters. That is the conditionality argument, and the outline above makes clear its assumptions. In designed experiments its assumption will mostly hold, often with observational data not. Some examples of problems will be: regression with lagged responses as predictors. Conditioning on the predictors in this case will also condition on the response! (I will add more examples). One book which discusses this problems in a lot of detail is Information and exponential families: In statistical theory by O. E Barndorff-Nielsen. See especially chapter 4. The author says the separation logic in this situation is however seldom explicated but gives the following references: R A Fisher (1956) Statistical Methods and Scientific Inference $\S 4.3$ and Sverdrup (1966) The present state of the decision theory and the Neyman-Pearson theory. The factorization used here is somewhat similar in spirit to the factorization theorem of sufficient statistics. If focus is on the regression parameters $\theta$, and the distribution of $X$ do not depend on $\theta$, then how could the distribution of (or variation in) $X$ contain information about $\theta$? This separation argument is helpful also because it points to the cases where it cannot be used, for instance regression with lagged responses as predictors.
Why aren't "error in X" models more widely used? The title "errors in variables" and the content of the question seems different, as it asks about why we do not take into account the variation in $X$ when modelling the conditional response, that is,
23,965
Are there any “esoteric” statistic tests with very low power?
(Related to the comment by @Scortchi) Suppose $X \sim N(\mu, 1)$ and we want to test the hypothesis \begin{align*} H_0&: \mu = 0 \\ H_1&: \mu \neq 0 \end{align*} For the sake of esetoricism, let's augment our data with an independent "coin flip" $Z \sim Bernoulli(p)$ where $p$ is known and no smaller than the significance level $\alpha$ (i.e. $p \in [\alpha, 1]$). Consider rejection regions of the form: $$R = \left\{(X, Z) \ | \ z = 1 \ \wedge |x| > \Phi^{-1}\left(\frac{\alpha}{2p}\right) \right\}$$ By construction, this is a valid test of size $\alpha$. \begin{align*} P(X\in R \ | \ \mu=0) &= P\left(Z=1 \ , \ |X| > \Phi^{-1}\left(\frac{\alpha}{2p}\right)\right) \\ &= P(Z=1)P\left(|X| > \Phi^{-1}\left(\frac{\alpha}{2p}\right)\right) \\ &= p\frac{\alpha}{p} = \alpha \end{align*} The power of this test however can never be more than $p$. For instance, suppose that our observed data is $(x, z) = (1000000, 0)$. It is obvious that the null hypothesis should be rejected, but since our coin "shows tails" we fail to reject the null. Setting $p=\alpha$ leads to an even sillier example where the rejection region doesn't depend on $X$ at all, but is still a valid Rejection region with size $\alpha$. A similar question could be given as homework by changing intersection to union in the rejection region. This region is uniformly less powerful than the one without $Z$, but is more reasonable in the sense that power doesn't have an upper bound.
Are there any “esoteric” statistic tests with very low power?
(Related to the comment by @Scortchi) Suppose $X \sim N(\mu, 1)$ and we want to test the hypothesis \begin{align*} H_0&: \mu = 0 \\ H_1&: \mu \neq 0 \end{align*} For the sake of esetoricism, let's aug
Are there any “esoteric” statistic tests with very low power? (Related to the comment by @Scortchi) Suppose $X \sim N(\mu, 1)$ and we want to test the hypothesis \begin{align*} H_0&: \mu = 0 \\ H_1&: \mu \neq 0 \end{align*} For the sake of esetoricism, let's augment our data with an independent "coin flip" $Z \sim Bernoulli(p)$ where $p$ is known and no smaller than the significance level $\alpha$ (i.e. $p \in [\alpha, 1]$). Consider rejection regions of the form: $$R = \left\{(X, Z) \ | \ z = 1 \ \wedge |x| > \Phi^{-1}\left(\frac{\alpha}{2p}\right) \right\}$$ By construction, this is a valid test of size $\alpha$. \begin{align*} P(X\in R \ | \ \mu=0) &= P\left(Z=1 \ , \ |X| > \Phi^{-1}\left(\frac{\alpha}{2p}\right)\right) \\ &= P(Z=1)P\left(|X| > \Phi^{-1}\left(\frac{\alpha}{2p}\right)\right) \\ &= p\frac{\alpha}{p} = \alpha \end{align*} The power of this test however can never be more than $p$. For instance, suppose that our observed data is $(x, z) = (1000000, 0)$. It is obvious that the null hypothesis should be rejected, but since our coin "shows tails" we fail to reject the null. Setting $p=\alpha$ leads to an even sillier example where the rejection region doesn't depend on $X$ at all, but is still a valid Rejection region with size $\alpha$. A similar question could be given as homework by changing intersection to union in the rejection region. This region is uniformly less powerful than the one without $Z$, but is more reasonable in the sense that power doesn't have an upper bound.
Are there any “esoteric” statistic tests with very low power? (Related to the comment by @Scortchi) Suppose $X \sim N(\mu, 1)$ and we want to test the hypothesis \begin{align*} H_0&: \mu = 0 \\ H_1&: \mu \neq 0 \end{align*} For the sake of esetoricism, let's aug
23,966
Are there any “esoteric” statistic tests with very low power?
There's a little-remarked-on corollary to the Neyman–Pearson lemma (proof in Geisser (2006), Modes of Parametric Statistical Inference, Ch 4.4): $$ \operatorname{E}\phi(X)=\alpha $$ $$ \phi(x) = \begin{cases} 0\ & \text{when $f_0(x) < kf_1(x)$} \\ 1\ & \text{when $f_0(x) > kf_1(x)$} \end{cases} $$ defines the least powerful level-$\alpha$ test, $\phi$, of the null hypothesis $H_0:$ density $f_0$ vs $H_1:$ density $f_1$ from data $x$. From this result you can derive uniformly least powerful, locally least powerful, uniformly least powerful similar, & least powerful "totally biased" tests (I mean those with lower power under any alternative than under the null). If you already have a uniformly most powerful, &c. test, simply multiply your test statistic by -1 to maintain the partitioning of the sample space it induces while reversing the ordering of the partitions. Perhaps, as @user54038 suggests, "failure of a general method of test construction" might be more interesting. Lehmann (1950), "Some principles of the theory of testing statistical hypotheses", Ann. Math. Statist., 21, 1, attributes the following example to Stein: Let $X$ be a random variable capable of taking on the values $0, \pm 1, \pm 2$ with probabilities as indicated: $$ \begin{array}{r c c c c c} & -2 & 2 & -1 & 1 & 0 \\ \hline \text{Hypothesis $H$:} & \frac{\alpha}{2} & \frac{\alpha}{2} & \frac{1}{2} - \alpha & \frac{1}{2} - \alpha & \alpha\\ \hline \text{Alternatives:} & pC & (1-p)C & \frac{1-C}{1-\alpha}\left(\frac{1}{2}-\alpha\right) & \frac{1-C}{1-\alpha}\left(\frac{1}{2}-\alpha\right) & \alpha\frac{1-c}{1-\alpha}\\ \end{array} $$ Here, $\alpha$, $C$, are constants $0 < \alpha \leq \frac{1}{2}$, $\frac{\alpha}{2-\alpha}< C <\alpha$, and $p$ ranges over the interval $[0,1]$. It is desired to test the hypothesis $H$ at significance level $\alpha$. The likelihood ratio test rejects when $X=\pm2$, and hence its power is $C$ against each alternative. Since $C<\alpha$, this test is literally worse than useless, for a test with power $\alpha$ can be obtained without observing $X$ at all, simply by the use of a table of random numbers. Note that it's the generalized likelihood ratio test he's considering, with $p$ in the role of a nuisance parameter to be maximized over. So when $X=-2$ or $X=2$, $\hat p=1$ or $\hat p=0$ respectively, & the likelihood ratio comes to $\frac{2C}{\alpha}$ in either case; for any other value of $X$ it's the lower value of $\frac{1-C}{1-\alpha}$.
Are there any “esoteric” statistic tests with very low power?
There's a little-remarked-on corollary to the Neyman–Pearson lemma (proof in Geisser (2006), Modes of Parametric Statistical Inference, Ch 4.4): $$ \operatorname{E}\phi(X)=\alpha $$ $$ \phi(x) = \beg
Are there any “esoteric” statistic tests with very low power? There's a little-remarked-on corollary to the Neyman–Pearson lemma (proof in Geisser (2006), Modes of Parametric Statistical Inference, Ch 4.4): $$ \operatorname{E}\phi(X)=\alpha $$ $$ \phi(x) = \begin{cases} 0\ & \text{when $f_0(x) < kf_1(x)$} \\ 1\ & \text{when $f_0(x) > kf_1(x)$} \end{cases} $$ defines the least powerful level-$\alpha$ test, $\phi$, of the null hypothesis $H_0:$ density $f_0$ vs $H_1:$ density $f_1$ from data $x$. From this result you can derive uniformly least powerful, locally least powerful, uniformly least powerful similar, & least powerful "totally biased" tests (I mean those with lower power under any alternative than under the null). If you already have a uniformly most powerful, &c. test, simply multiply your test statistic by -1 to maintain the partitioning of the sample space it induces while reversing the ordering of the partitions. Perhaps, as @user54038 suggests, "failure of a general method of test construction" might be more interesting. Lehmann (1950), "Some principles of the theory of testing statistical hypotheses", Ann. Math. Statist., 21, 1, attributes the following example to Stein: Let $X$ be a random variable capable of taking on the values $0, \pm 1, \pm 2$ with probabilities as indicated: $$ \begin{array}{r c c c c c} & -2 & 2 & -1 & 1 & 0 \\ \hline \text{Hypothesis $H$:} & \frac{\alpha}{2} & \frac{\alpha}{2} & \frac{1}{2} - \alpha & \frac{1}{2} - \alpha & \alpha\\ \hline \text{Alternatives:} & pC & (1-p)C & \frac{1-C}{1-\alpha}\left(\frac{1}{2}-\alpha\right) & \frac{1-C}{1-\alpha}\left(\frac{1}{2}-\alpha\right) & \alpha\frac{1-c}{1-\alpha}\\ \end{array} $$ Here, $\alpha$, $C$, are constants $0 < \alpha \leq \frac{1}{2}$, $\frac{\alpha}{2-\alpha}< C <\alpha$, and $p$ ranges over the interval $[0,1]$. It is desired to test the hypothesis $H$ at significance level $\alpha$. The likelihood ratio test rejects when $X=\pm2$, and hence its power is $C$ against each alternative. Since $C<\alpha$, this test is literally worse than useless, for a test with power $\alpha$ can be obtained without observing $X$ at all, simply by the use of a table of random numbers. Note that it's the generalized likelihood ratio test he's considering, with $p$ in the role of a nuisance parameter to be maximized over. So when $X=-2$ or $X=2$, $\hat p=1$ or $\hat p=0$ respectively, & the likelihood ratio comes to $\frac{2C}{\alpha}$ in either case; for any other value of $X$ it's the lower value of $\frac{1-C}{1-\alpha}$.
Are there any “esoteric” statistic tests with very low power? There's a little-remarked-on corollary to the Neyman–Pearson lemma (proof in Geisser (2006), Modes of Parametric Statistical Inference, Ch 4.4): $$ \operatorname{E}\phi(X)=\alpha $$ $$ \phi(x) = \beg
23,967
Handling missing data for a neural network
The problem of missing data has in data analysis obtained considerable attention. In their reference book [1] Rubin and Little define three mechanisms behind data becoming missing (definitions from from https://en.wikipedia.org/wiki/Missing_data): MCAR: Values in a data set are missing completely at random (MCAR) if the events that lead to any particular data-item being missing are independent both of observable variables and of unobservable parameters of interest, and occur entirely at random MAR: Missing at random occurs when the missingness is not random, but where missingness can be fully accounted for by variables where there are completely observed MNAR: the value of the variable that's missing is related to the reason it's missing In the example you give, whether a subject smokes or not, tends often to be missing. I would believe that MNAR is the case for smoking. Non-smokers have no problem with filling in this fact, whereas some (perhaps light) smokers can be reluctant to indicate a 'Yes'. So the missingness of 'Smoking' is most likely to indicate a smoker, but we don't know. When MNAR is the case, you need to model the missing data mechanism as well. Being creative, it is possible to model a simple missing data mechanism with a neural network. You can represent the boolean variable (like smoker, yes/no) by one input neuron, with encoded input $1$ for smoker and $-1$ for non-smoker. Give the value $0$ as input to this neuron when the smoker variable is missing. Any weights connecting with the 'smoker input neuron' will have no influence on the further computation, because $0 \times w_{i\,j}=0$. You don't have to adapt the training algorithm or the network topology for this solution to work for boolean and enumerated variables. [1] Rubin, Donald B.; Little, Roderick J. A. (2002). Statistical analysis with missing data (2nd ed.). New York: Wiley
Handling missing data for a neural network
The problem of missing data has in data analysis obtained considerable attention. In their reference book [1] Rubin and Little define three mechanisms behind data becoming missing (definitions from fr
Handling missing data for a neural network The problem of missing data has in data analysis obtained considerable attention. In their reference book [1] Rubin and Little define three mechanisms behind data becoming missing (definitions from from https://en.wikipedia.org/wiki/Missing_data): MCAR: Values in a data set are missing completely at random (MCAR) if the events that lead to any particular data-item being missing are independent both of observable variables and of unobservable parameters of interest, and occur entirely at random MAR: Missing at random occurs when the missingness is not random, but where missingness can be fully accounted for by variables where there are completely observed MNAR: the value of the variable that's missing is related to the reason it's missing In the example you give, whether a subject smokes or not, tends often to be missing. I would believe that MNAR is the case for smoking. Non-smokers have no problem with filling in this fact, whereas some (perhaps light) smokers can be reluctant to indicate a 'Yes'. So the missingness of 'Smoking' is most likely to indicate a smoker, but we don't know. When MNAR is the case, you need to model the missing data mechanism as well. Being creative, it is possible to model a simple missing data mechanism with a neural network. You can represent the boolean variable (like smoker, yes/no) by one input neuron, with encoded input $1$ for smoker and $-1$ for non-smoker. Give the value $0$ as input to this neuron when the smoker variable is missing. Any weights connecting with the 'smoker input neuron' will have no influence on the further computation, because $0 \times w_{i\,j}=0$. You don't have to adapt the training algorithm or the network topology for this solution to work for boolean and enumerated variables. [1] Rubin, Donald B.; Little, Roderick J. A. (2002). Statistical analysis with missing data (2nd ed.). New York: Wiley
Handling missing data for a neural network The problem of missing data has in data analysis obtained considerable attention. In their reference book [1] Rubin and Little define three mechanisms behind data becoming missing (definitions from fr
23,968
Baseline adjustment in mixed models
I assume this is a parallel group experiment (randomized assignment to just one group or the other, not some kind of cross-over). Assuming this is a continuous outcome, the purpose of baseline adjustment is to reduce the variability of the treatment difference. Some discussion around this and the conditions under which it is helpful are found in Senn, S. (2006). Change from baseline and analysis of covariance revisited. Statistics in medicine, 25(24), 4334-4344. It does not really make a difference whether it is a mixed model or not. However, if one does include the baseline, it is usually recommended to have a time (as a factor) by baseline interaction, because the importance of the baseline will usually decrease over time. Personally, I would by default include the baseline in any model in the type of trial you are describing (and for the type of mixed model you describe always include the time by baseline interaction). The most usual way is to indeed include the baseline as a fixed covariate. Using it as a random effect is of course also possible, but less common. The final option is to use the baseline as yet another observation instead of as a model term. This assumes joint (multivariate-)normality (assuming you are using a normal model) of the error terms across the visits including the baseline. This can work nicely, but tends to be problematic if subjects were only included in the study, if their values were above (or below) a certain threshold (because that induces very strong non-normality due to the truncated distribution of the baseline values). For that reason including it as a model term is more common.
Baseline adjustment in mixed models
I assume this is a parallel group experiment (randomized assignment to just one group or the other, not some kind of cross-over). Assuming this is a continuous outcome, the purpose of baseline adjustm
Baseline adjustment in mixed models I assume this is a parallel group experiment (randomized assignment to just one group or the other, not some kind of cross-over). Assuming this is a continuous outcome, the purpose of baseline adjustment is to reduce the variability of the treatment difference. Some discussion around this and the conditions under which it is helpful are found in Senn, S. (2006). Change from baseline and analysis of covariance revisited. Statistics in medicine, 25(24), 4334-4344. It does not really make a difference whether it is a mixed model or not. However, if one does include the baseline, it is usually recommended to have a time (as a factor) by baseline interaction, because the importance of the baseline will usually decrease over time. Personally, I would by default include the baseline in any model in the type of trial you are describing (and for the type of mixed model you describe always include the time by baseline interaction). The most usual way is to indeed include the baseline as a fixed covariate. Using it as a random effect is of course also possible, but less common. The final option is to use the baseline as yet another observation instead of as a model term. This assumes joint (multivariate-)normality (assuming you are using a normal model) of the error terms across the visits including the baseline. This can work nicely, but tends to be problematic if subjects were only included in the study, if their values were above (or below) a certain threshold (because that induces very strong non-normality due to the truncated distribution of the baseline values). For that reason including it as a model term is more common.
Baseline adjustment in mixed models I assume this is a parallel group experiment (randomized assignment to just one group or the other, not some kind of cross-over). Assuming this is a continuous outcome, the purpose of baseline adjustm
23,969
How the probability threshold of a classifier can be adjusted in case of multiple classes? [duplicate]
You can use a prior distribution over the classes. Let us assume that your model computes a vector of class probabilities $v$. You can define a vector of prior probabilities $\pi$ and then compute your class probabilities to be proportional to $v \circ \pi$, where $\circ$ denotes an element-wise product. So the probability that your observation belongs to class $c$ is proportional to $v_c\pi_c$. If you want a proper distribution you just need to renormalize. In your example, if you want your predictions to be slightly biased to class 1, you can define $\pi=(0.4, 0.3, 0.3)$, for instance. If you think about it, in the binary case this is what you are implicitly doing this when you change the threshold. Let us say you establish the following rule: if your probability vector is $v$ and your decision function is $f(x)$, then $$ f(x)= \begin{cases} 2 & v_2\geq \theta \\ 1 & \mbox{otherwise} \end{cases} $$ for some $\theta \in (0,1)$. Then this is equivalent (at least when it comes to making the decision) to computing the class probabilities to be proportional to $(\frac{v_1}{1-\theta}, \frac{v_2}{\theta})$, so you would be defining $\pi=(\frac{1}{1-\theta}, \frac{1}{\theta})$. You can also learn the value of $\pi$ from your data. For instance, you can compute the proportion of each class and use that as prior probabilities. For a more principled way of incorporating this kind of prior assumptions into your model, you might want to look at Bayesian inference.
How the probability threshold of a classifier can be adjusted in case of multiple classes? [duplicat
You can use a prior distribution over the classes. Let us assume that your model computes a vector of class probabilities $v$. You can define a vector of prior probabilities $\pi$ and then compute you
How the probability threshold of a classifier can be adjusted in case of multiple classes? [duplicate] You can use a prior distribution over the classes. Let us assume that your model computes a vector of class probabilities $v$. You can define a vector of prior probabilities $\pi$ and then compute your class probabilities to be proportional to $v \circ \pi$, where $\circ$ denotes an element-wise product. So the probability that your observation belongs to class $c$ is proportional to $v_c\pi_c$. If you want a proper distribution you just need to renormalize. In your example, if you want your predictions to be slightly biased to class 1, you can define $\pi=(0.4, 0.3, 0.3)$, for instance. If you think about it, in the binary case this is what you are implicitly doing this when you change the threshold. Let us say you establish the following rule: if your probability vector is $v$ and your decision function is $f(x)$, then $$ f(x)= \begin{cases} 2 & v_2\geq \theta \\ 1 & \mbox{otherwise} \end{cases} $$ for some $\theta \in (0,1)$. Then this is equivalent (at least when it comes to making the decision) to computing the class probabilities to be proportional to $(\frac{v_1}{1-\theta}, \frac{v_2}{\theta})$, so you would be defining $\pi=(\frac{1}{1-\theta}, \frac{1}{\theta})$. You can also learn the value of $\pi$ from your data. For instance, you can compute the proportion of each class and use that as prior probabilities. For a more principled way of incorporating this kind of prior assumptions into your model, you might want to look at Bayesian inference.
How the probability threshold of a classifier can be adjusted in case of multiple classes? [duplicat You can use a prior distribution over the classes. Let us assume that your model computes a vector of class probabilities $v$. You can define a vector of prior probabilities $\pi$ and then compute you
23,970
Why is bias equal to zero for OLS estimator with respect to linear regression?
We can think of any supervised learning task, be it regression or classification, as attempting to learn an underlying signal from noisy data. Consider the follwoing simple example: Our goal is to estimate the true signal $f(x)$ based on a set of observed pairs $\{x_i, y_i\}$ where the $y_i = f(x_i) + \epsilon_i$ and $\epsilon_i$ is some random noise with mean 0. To this end, we fit a model $\hat{f}(x)$ using our favorite machine-learning algorithm. When we say that the OLS estimator is unbiased, what we really mean is that if the true form of the model is $f(x) = \beta_0 + \beta_1 x$, then the OLS estimates $\hat{\beta}_0$ and $\hat{\beta}_1$ have the lovely properties that $E(\hat{\beta}_0) = \beta_0$ and $E(\hat{\beta}_1) = \beta_1$. This is true for our simple example, but it is very strong assumption! In general, and to the extent that no model is really correct, we can't make such assumptions about $f(x)$. So a model of the form $\hat{f}(x) = \hat{\beta}_0 + \hat{\beta}_1 x$ will be biased. What if our data look like this instead? (spoiler alert: $f(x) = sin(x)$) Now, if we fit the naive model $\hat{f}(x) = \hat{\beta}_0 + \hat{\beta}_1 x$, it is woefully inadequate at estimating $f(x)$ (high bias). But on the other hand, it is relatively insensitive to noise (low variance). If we add more terms to the model, say $\hat{f}(x) = \hat{\beta}_0 + \hat{\beta}_1x + \hat{\beta}_2x^2 + ... \hat{\beta}_p x^p$, we can capture more of the "unkown" signal by virtue of the added complexity in our model's structure. We lower the bias on the observed data, but the added complexity necessarily increases the variance. (Note, if $f(x)$ is truly periodic, polynomial expansion is a poor choice!) But again, unless we know that the true $f(x) = \beta_0 + \beta_1 sin(x)$, our model will never be unbiased, even if we use OLS to fit the parameters.
Why is bias equal to zero for OLS estimator with respect to linear regression?
We can think of any supervised learning task, be it regression or classification, as attempting to learn an underlying signal from noisy data. Consider the follwoing simple example: Our goal is to es
Why is bias equal to zero for OLS estimator with respect to linear regression? We can think of any supervised learning task, be it regression or classification, as attempting to learn an underlying signal from noisy data. Consider the follwoing simple example: Our goal is to estimate the true signal $f(x)$ based on a set of observed pairs $\{x_i, y_i\}$ where the $y_i = f(x_i) + \epsilon_i$ and $\epsilon_i$ is some random noise with mean 0. To this end, we fit a model $\hat{f}(x)$ using our favorite machine-learning algorithm. When we say that the OLS estimator is unbiased, what we really mean is that if the true form of the model is $f(x) = \beta_0 + \beta_1 x$, then the OLS estimates $\hat{\beta}_0$ and $\hat{\beta}_1$ have the lovely properties that $E(\hat{\beta}_0) = \beta_0$ and $E(\hat{\beta}_1) = \beta_1$. This is true for our simple example, but it is very strong assumption! In general, and to the extent that no model is really correct, we can't make such assumptions about $f(x)$. So a model of the form $\hat{f}(x) = \hat{\beta}_0 + \hat{\beta}_1 x$ will be biased. What if our data look like this instead? (spoiler alert: $f(x) = sin(x)$) Now, if we fit the naive model $\hat{f}(x) = \hat{\beta}_0 + \hat{\beta}_1 x$, it is woefully inadequate at estimating $f(x)$ (high bias). But on the other hand, it is relatively insensitive to noise (low variance). If we add more terms to the model, say $\hat{f}(x) = \hat{\beta}_0 + \hat{\beta}_1x + \hat{\beta}_2x^2 + ... \hat{\beta}_p x^p$, we can capture more of the "unkown" signal by virtue of the added complexity in our model's structure. We lower the bias on the observed data, but the added complexity necessarily increases the variance. (Note, if $f(x)$ is truly periodic, polynomial expansion is a poor choice!) But again, unless we know that the true $f(x) = \beta_0 + \beta_1 sin(x)$, our model will never be unbiased, even if we use OLS to fit the parameters.
Why is bias equal to zero for OLS estimator with respect to linear regression? We can think of any supervised learning task, be it regression or classification, as attempting to learn an underlying signal from noisy data. Consider the follwoing simple example: Our goal is to es
23,971
Why is bias equal to zero for OLS estimator with respect to linear regression?
Bias based on my understanding, represents the error because of using a simple classifer(eg: linear) to capture a complex non-linear decision boundary. So I expected OLS estimator to have high bias and low variance. G-M Theorem states that OLS estimator is unbiased if the true data-generating process is linear in observables. So OLS is not guaranteed to be unbiased if you already presume that the true data-generating process is "a complex non-linear decision boundary".
Why is bias equal to zero for OLS estimator with respect to linear regression?
Bias based on my understanding, represents the error because of using a simple classifer(eg: linear) to capture a complex non-linear decision boundary. So I expected OLS estimator to have high bias an
Why is bias equal to zero for OLS estimator with respect to linear regression? Bias based on my understanding, represents the error because of using a simple classifer(eg: linear) to capture a complex non-linear decision boundary. So I expected OLS estimator to have high bias and low variance. G-M Theorem states that OLS estimator is unbiased if the true data-generating process is linear in observables. So OLS is not guaranteed to be unbiased if you already presume that the true data-generating process is "a complex non-linear decision boundary".
Why is bias equal to zero for OLS estimator with respect to linear regression? Bias based on my understanding, represents the error because of using a simple classifer(eg: linear) to capture a complex non-linear decision boundary. So I expected OLS estimator to have high bias an
23,972
Intuitive explanation for inverse probability of treatment weights (IPTWs) in propensity score weighting?
The propensity score $p(x_i)$ calculated is the probability of subject $i$ to receive a treatment given the information in $X$. The IPTW procedure tries to make counter-factual inference more prominent using the propensity scores. Having a high-probability to receive treatment and then to actually receive treatment is expected, no counterfactual information there. Having a low-probability to receive treatment and actually receiving treatment is unusual and therefore more informative of how treatment would affect subjects with low probability of receiving it; ie. characteristics mostly associated with control subjects. Therefore the weighting for treatment subject is $\text{w}_{i,j=\text{treat}} = \frac{1}{p(x_i)}$ adding more weight to unlikely/highly-informative treatment subjects. Following the same idea, if a control subject has a large probability of receiving treatment it is an informative indicator of how subjects in the treatment would be behave if they were in the control group. In this case the weighting for control subjects is $\text{w}_{i,j=\text{control}} = \frac{1}{1-p(x_i)}$ adding more weight to unlikely/highly-informative control subjects. Indeed, the equations at first instance can appear somewhat arbitrary but I think that they are easily explained under a counter-factual rationale. Ultimately all matching/PSM/weighting routines try to sketch out a quasi-experimental framework in our observational data; a new ideal experiment. In case you have not come across them I strongly suggest you read Stuart (2010): Matching Methods for Causal Inference: A Review and a Look Forward and Thoemmes and Kim (2011): A Systematic Review of Propensity Score Methods in the Social Sciences; both are nicely written and serve as good entries papers on the matter. Also check this excellent 2015 lecture on Why Propensity Scores Should Not Be Used for Matching by King. They really helped me build my intuition on the subject.
Intuitive explanation for inverse probability of treatment weights (IPTWs) in propensity score weigh
The propensity score $p(x_i)$ calculated is the probability of subject $i$ to receive a treatment given the information in $X$. The IPTW procedure tries to make counter-factual inference more prominen
Intuitive explanation for inverse probability of treatment weights (IPTWs) in propensity score weighting? The propensity score $p(x_i)$ calculated is the probability of subject $i$ to receive a treatment given the information in $X$. The IPTW procedure tries to make counter-factual inference more prominent using the propensity scores. Having a high-probability to receive treatment and then to actually receive treatment is expected, no counterfactual information there. Having a low-probability to receive treatment and actually receiving treatment is unusual and therefore more informative of how treatment would affect subjects with low probability of receiving it; ie. characteristics mostly associated with control subjects. Therefore the weighting for treatment subject is $\text{w}_{i,j=\text{treat}} = \frac{1}{p(x_i)}$ adding more weight to unlikely/highly-informative treatment subjects. Following the same idea, if a control subject has a large probability of receiving treatment it is an informative indicator of how subjects in the treatment would be behave if they were in the control group. In this case the weighting for control subjects is $\text{w}_{i,j=\text{control}} = \frac{1}{1-p(x_i)}$ adding more weight to unlikely/highly-informative control subjects. Indeed, the equations at first instance can appear somewhat arbitrary but I think that they are easily explained under a counter-factual rationale. Ultimately all matching/PSM/weighting routines try to sketch out a quasi-experimental framework in our observational data; a new ideal experiment. In case you have not come across them I strongly suggest you read Stuart (2010): Matching Methods for Causal Inference: A Review and a Look Forward and Thoemmes and Kim (2011): A Systematic Review of Propensity Score Methods in the Social Sciences; both are nicely written and serve as good entries papers on the matter. Also check this excellent 2015 lecture on Why Propensity Scores Should Not Be Used for Matching by King. They really helped me build my intuition on the subject.
Intuitive explanation for inverse probability of treatment weights (IPTWs) in propensity score weigh The propensity score $p(x_i)$ calculated is the probability of subject $i$ to receive a treatment given the information in $X$. The IPTW procedure tries to make counter-factual inference more prominen
23,973
Why use linear regression instead of average y per x
It comes down to how you would judge the quality of your model. The general approach most would agree on is that a good prediction model minimizes the unexplained portion, or errors (predicted - observed value). You could define a model that minimizes the errors overall. Or you could define a model that minimized the sum of squared errors ($\hat{\epsilon}$) overall $\sum_{i=1}^N\hat{\epsilon}_i^2 \rightarrow Minimum$. This last version is the least squares method and, if all assumptions are met, it will come up with the best linear unbiased estimator (instead of e.g. your means ratio). Basically, taking the average of the house price by square meters will not minimize your prediction error as it can not accommodate large departures from your average house price per square meter. Only least squares, i.e. minimal sum of all squared deviations of your predicted value minus the observed values, comes up with a line that fits your data cloud best. For a minimal example in R consider this: hp = c(500, 750, 800, 900, 1000, 1000, 1100) sm = c(100, 120, 130, 130, 150, 160, 165) with house prices (hp) and square meters (sm). When plotting, you obtain a figure where increasing sm goes hand-in-hand with increasing hp Now, you could do what you suggested: apsm = mean(hp/sm) That is, you divide hp by its sm and take the average to obtain the average per squre meters (apsm). To predict a the house price you could obtain a vector of predicted values pred ($\hat{hp}$) pred = apsm*sm Your predicted line now looks like this: The problem with this line is that it is not the line that minimizes the error (hp-pred = error). Or to be more precise, it does not minimize the sum of all your squared errors. If you were to run a linear model with eg. lm(hp ~ sm) your fitted line (red) would be different and it would be more efficient and unbiased:
Why use linear regression instead of average y per x
It comes down to how you would judge the quality of your model. The general approach most would agree on is that a good prediction model minimizes the unexplained portion, or errors (predicted - obser
Why use linear regression instead of average y per x It comes down to how you would judge the quality of your model. The general approach most would agree on is that a good prediction model minimizes the unexplained portion, or errors (predicted - observed value). You could define a model that minimizes the errors overall. Or you could define a model that minimized the sum of squared errors ($\hat{\epsilon}$) overall $\sum_{i=1}^N\hat{\epsilon}_i^2 \rightarrow Minimum$. This last version is the least squares method and, if all assumptions are met, it will come up with the best linear unbiased estimator (instead of e.g. your means ratio). Basically, taking the average of the house price by square meters will not minimize your prediction error as it can not accommodate large departures from your average house price per square meter. Only least squares, i.e. minimal sum of all squared deviations of your predicted value minus the observed values, comes up with a line that fits your data cloud best. For a minimal example in R consider this: hp = c(500, 750, 800, 900, 1000, 1000, 1100) sm = c(100, 120, 130, 130, 150, 160, 165) with house prices (hp) and square meters (sm). When plotting, you obtain a figure where increasing sm goes hand-in-hand with increasing hp Now, you could do what you suggested: apsm = mean(hp/sm) That is, you divide hp by its sm and take the average to obtain the average per squre meters (apsm). To predict a the house price you could obtain a vector of predicted values pred ($\hat{hp}$) pred = apsm*sm Your predicted line now looks like this: The problem with this line is that it is not the line that minimizes the error (hp-pred = error). Or to be more precise, it does not minimize the sum of all your squared errors. If you were to run a linear model with eg. lm(hp ~ sm) your fitted line (red) would be different and it would be more efficient and unbiased:
Why use linear regression instead of average y per x It comes down to how you would judge the quality of your model. The general approach most would agree on is that a good prediction model minimizes the unexplained portion, or errors (predicted - obser
23,974
Why use linear regression instead of average y per x
There are two issues; the first has to do with a potential intercept, and the second has to do with the variability about the mean. If the model should go through the origin (in effect, if there are no fixed costs and the true model is really linear (perfectly proportional to area) across the whole range, then it may make sense to force the fit through the origin. But if there are costs that affect the price are not proportional to the area then you will probably need an intercept. In the case that you choose to model the relationship as a line through the origin, it might make sense to consider the mean of the ratios ($r_i= y_i/x_i$) -- it depends on whether the spread of prices about the line is proportional to the size of house (equivalently, proportional to the mean price) : If this is the case then taking logs of both variables should leave you with a constant spread about a line with slope 1: -- if this is the case, then the average ratio might make some sense (though there are other ways to estimate the slope - such as the geometric mean of those ratios for example - that might sometimes be better choices). If the spread is not proportional to area (/proportional to expected price), then it's not the best way to estimate the coefficient and some form of (possibly weighted) regression through the origin might be better
Why use linear regression instead of average y per x
There are two issues; the first has to do with a potential intercept, and the second has to do with the variability about the mean. If the model should go through the origin (in effect, if there are n
Why use linear regression instead of average y per x There are two issues; the first has to do with a potential intercept, and the second has to do with the variability about the mean. If the model should go through the origin (in effect, if there are no fixed costs and the true model is really linear (perfectly proportional to area) across the whole range, then it may make sense to force the fit through the origin. But if there are costs that affect the price are not proportional to the area then you will probably need an intercept. In the case that you choose to model the relationship as a line through the origin, it might make sense to consider the mean of the ratios ($r_i= y_i/x_i$) -- it depends on whether the spread of prices about the line is proportional to the size of house (equivalently, proportional to the mean price) : If this is the case then taking logs of both variables should leave you with a constant spread about a line with slope 1: -- if this is the case, then the average ratio might make some sense (though there are other ways to estimate the slope - such as the geometric mean of those ratios for example - that might sometimes be better choices). If the spread is not proportional to area (/proportional to expected price), then it's not the best way to estimate the coefficient and some form of (possibly weighted) regression through the origin might be better
Why use linear regression instead of average y per x There are two issues; the first has to do with a potential intercept, and the second has to do with the variability about the mean. If the model should go through the origin (in effect, if there are n
23,975
Are graphical models and Boltzmann machines related mathematically?
Boltzmann machines vs. restricted Boltzmann machines AFAIK the Boltzmann machines is a type of graphical model, and the model that's related to neural networks is the restricted Boltzmann machines (RBM). The difference between Boltzmann machines and restricted Boltzmann machines, from the book Machine Learning A Probabilistic Perspective RBMs vs. neural netowrks For RBMs (ref: A Practical Guide to Training Restricted Boltzmann Machines by Geoffrey Hinton) $$p(\mathbf{v},\mathbf{h})=\frac{1}{Z}\exp(\sum a_iv_i+\sum b_jh_j + \sum v_ih_jw_{ij})$$ $$p(h_j=1|\mathbf{v})=\sigma(b_j+\sum v_iw_{ij})$$ $$p(v_i=1|\mathbf{h})=\sigma(a_i+\sum h_jw_{ij})$$ where $\mathbf{v}$ and $\mathbf{h}$ correspond to the visible and hidden units in the above figure, and $\sigma()$ is the Sigmoid function. The conditional probabilities are computed in the same form of network layers, so the trained weights of RBMs can be used directly as the weights of neural networks or as a starting point of training. I think the RBM itself is more of a graphical model than a type of neural network, since it is undirected, it has well defined conditional independencies, and it uses its own training algorithms (e.g. contrastive divergence).
Are graphical models and Boltzmann machines related mathematically?
Boltzmann machines vs. restricted Boltzmann machines AFAIK the Boltzmann machines is a type of graphical model, and the model that's related to neural networks is the restricted Boltzmann machines (R
Are graphical models and Boltzmann machines related mathematically? Boltzmann machines vs. restricted Boltzmann machines AFAIK the Boltzmann machines is a type of graphical model, and the model that's related to neural networks is the restricted Boltzmann machines (RBM). The difference between Boltzmann machines and restricted Boltzmann machines, from the book Machine Learning A Probabilistic Perspective RBMs vs. neural netowrks For RBMs (ref: A Practical Guide to Training Restricted Boltzmann Machines by Geoffrey Hinton) $$p(\mathbf{v},\mathbf{h})=\frac{1}{Z}\exp(\sum a_iv_i+\sum b_jh_j + \sum v_ih_jw_{ij})$$ $$p(h_j=1|\mathbf{v})=\sigma(b_j+\sum v_iw_{ij})$$ $$p(v_i=1|\mathbf{h})=\sigma(a_i+\sum h_jw_{ij})$$ where $\mathbf{v}$ and $\mathbf{h}$ correspond to the visible and hidden units in the above figure, and $\sigma()$ is the Sigmoid function. The conditional probabilities are computed in the same form of network layers, so the trained weights of RBMs can be used directly as the weights of neural networks or as a starting point of training. I think the RBM itself is more of a graphical model than a type of neural network, since it is undirected, it has well defined conditional independencies, and it uses its own training algorithms (e.g. contrastive divergence).
Are graphical models and Boltzmann machines related mathematically? Boltzmann machines vs. restricted Boltzmann machines AFAIK the Boltzmann machines is a type of graphical model, and the model that's related to neural networks is the restricted Boltzmann machines (R
23,976
Are graphical models and Boltzmann machines related mathematically?
This just confirms/verifies the accepted answer, that Boltzmann machines are indeed a special case of graphical model. Specifically, this question is addressed on pp. 127-127 of Koller, Friedman, Probabilistic Graphical Models: Principles and Techniques, in Box 4.C. One of the earliest types of Markov network models is the Ising model which first arose in statistical physics as a model for the energy of a physical system involving a system of interacting atoms... Related to the Ising model is the Boltzmann machine distribution... the resulting energy can be reformulated in terms of an Ising model (Exercise 4.12). How the Ising model, originally a concept from the statistical mechanics literature, can be formulated as a graphical model is given in much detail in Example 3.1., Section 3.3., on pp. 41-43 of Wainwright, Jordan, Graphical Models, Exponential Families, and Variational Inference. Apparently the Ising model was instrumental in the foundation of the field of graphical models during the late 1970's and early 1980's, at least based on what Steffen Lauritzen says in both the preface and introduction to his book, Graphical Models. This interpretation also seems supported by Section 4.8 in Koller and Friedman above-cited book. The development of Boltzmann machines from the Ising model may have been an independent occurrence, based on that same section of Koller and Friedman as well, which claims that "Boltzmann machines were first proposed by Hinton and Sejnowski (1983)", which seems to have occurred after the initial work in developing Markov random fields as generalizations of the Ising model, although the work behind that paper could have begun much earlier than 1983. My confusion regarding this relationship, when I wrote this question more than a year ago, stemmed from the fact that I first encountered both the Ising model, and the Boltzmann machine model for neurons, in the physics literature. As Koller and Friedman mention, the literature within the statistical physics community about the Ising model and related notions is truly vast. In my experience it is also fairly insular, in the sense that while statisticians and computer scientists studying graphical models will mention how the field is related to statistical mechanics, no reference I have ever found from the statistical physics literature mentions the connections to other fields or tries to exploit it. (Hence causing me to doubt and be confused by the notion that there could be any such connections to other fields.) For an example of the physicist's perspective on both the Ising model and the Boltzmann machine, see the textbook from the course where I first learned of it. It also mentions mean field methods, if I remember correctly, something discussed as well in the Jordan and Wainwright article cited above.
Are graphical models and Boltzmann machines related mathematically?
This just confirms/verifies the accepted answer, that Boltzmann machines are indeed a special case of graphical model. Specifically, this question is addressed on pp. 127-127 of Koller, Friedman, Prob
Are graphical models and Boltzmann machines related mathematically? This just confirms/verifies the accepted answer, that Boltzmann machines are indeed a special case of graphical model. Specifically, this question is addressed on pp. 127-127 of Koller, Friedman, Probabilistic Graphical Models: Principles and Techniques, in Box 4.C. One of the earliest types of Markov network models is the Ising model which first arose in statistical physics as a model for the energy of a physical system involving a system of interacting atoms... Related to the Ising model is the Boltzmann machine distribution... the resulting energy can be reformulated in terms of an Ising model (Exercise 4.12). How the Ising model, originally a concept from the statistical mechanics literature, can be formulated as a graphical model is given in much detail in Example 3.1., Section 3.3., on pp. 41-43 of Wainwright, Jordan, Graphical Models, Exponential Families, and Variational Inference. Apparently the Ising model was instrumental in the foundation of the field of graphical models during the late 1970's and early 1980's, at least based on what Steffen Lauritzen says in both the preface and introduction to his book, Graphical Models. This interpretation also seems supported by Section 4.8 in Koller and Friedman above-cited book. The development of Boltzmann machines from the Ising model may have been an independent occurrence, based on that same section of Koller and Friedman as well, which claims that "Boltzmann machines were first proposed by Hinton and Sejnowski (1983)", which seems to have occurred after the initial work in developing Markov random fields as generalizations of the Ising model, although the work behind that paper could have begun much earlier than 1983. My confusion regarding this relationship, when I wrote this question more than a year ago, stemmed from the fact that I first encountered both the Ising model, and the Boltzmann machine model for neurons, in the physics literature. As Koller and Friedman mention, the literature within the statistical physics community about the Ising model and related notions is truly vast. In my experience it is also fairly insular, in the sense that while statisticians and computer scientists studying graphical models will mention how the field is related to statistical mechanics, no reference I have ever found from the statistical physics literature mentions the connections to other fields or tries to exploit it. (Hence causing me to doubt and be confused by the notion that there could be any such connections to other fields.) For an example of the physicist's perspective on both the Ising model and the Boltzmann machine, see the textbook from the course where I first learned of it. It also mentions mean field methods, if I remember correctly, something discussed as well in the Jordan and Wainwright article cited above.
Are graphical models and Boltzmann machines related mathematically? This just confirms/verifies the accepted answer, that Boltzmann machines are indeed a special case of graphical model. Specifically, this question is addressed on pp. 127-127 of Koller, Friedman, Prob
23,977
Box plot notches vs. Tukey-Kramer interval
As far as the notched boxplot goes, the McGill et al [1] reference mentioned in your question contains pretty complete details (not everything I say here is explicitly mentioned there, but nevertheless it's sufficiently detailed to figure it out). The interval is a robustified but Gaussian-based one The paper quotes the following interval for notches (where $M$ is the sample median and $R$ is the sample interquartile range): $$M\pm 1.7 \times 1.25R/(1.35\sqrt{N})$$ where: $1.35$ is an asymptotic conversion factor to turn IQRs into estimates of $\sigma$ -- specifically, it's approximately the difference between the 0.75 quantile and the 0.25 quantile of a standard normal; the population quartiles are about 1.35 $\sigma$ apart, so a value of around $R/1.35$ should be a consistent (asymptotically unbiased) estimate of $\sigma$ (more accurately, about 1.349). $1.25$ comes in because we're dealing with the asymptotic standard error of the median rather than the mean. Specifically, the asymptotic variance of the sample median is $\frac{1}{4nf_0^2}$ where $f_0$ is the density-height at the median. For a normal distribution, $f_0$ is $\frac{1}{\sqrt{2\pi}\sigma}\approx \frac{0.3989}{\sigma}$, so the asymptotic standard error of the sample median is $\frac{1}{2\sqrt{N}f_0}= \sqrt{\pi/2}\sigma/\sqrt{N}\approx 1.253\sigma/\sqrt{N}$. As StasK mentions here, the smaller $N$ is, the the more dubious this would be (replacing his third reason with one about the reasonableness of using the normal distribution in the first place. Combining the above two, we obtain an asymptotic estimate of the standard error of the median of about $1.25R/(1.35\sqrt{N})$. McGill et al credit this to Kendall and Stuart (I don't recall whether the particular formula occurs there or not, but the components will be). So all that's left to discuss is the factor of 1.7. Note that if we were comparing one sample to a fixed value (say a hypothesized median) we'd use 1.96 for a 5% test; consequently, if we had two very different standard errors (one relatively large, one very small), that would be about the factor to use (since if the null were true, the difference would be almost entirely due to variation in the one with larger standard error, and the small one could - approximately - be treated as effectively fixed). On the other hand, if the two standard errors were the same, 1.96 would be much too large a factor, since both sets of notches come into it -- for the two sets of notches to fail to overlap we are adding one of each. This would make the right factor $1.96/\sqrt{2}\approx 1.386$ asymptotically. Somewhere in between , we have 1.7 as a rough compromise factor. McGill et al describe it as "empirically selected". It does come quite close to assuming a particular ratio of variances, so my guess (and it's nothing more than that) is that the empirical selection (presumably based on some simulation) was between a set of round-value ratios for the variances (like 1:1, 2:1,3:1,... ), of which the "best compromise" $r$ from the $r:1$ ratio was then plugged into $1.96/\sqrt{1+1/r}$ rounded to two figures. At least it's a plausible way to end up very close to 1.7. Putting them all (1.35,1.25 and 1.7) together gives about 1.57. Some sources get 1.58 by computing the 1.35 or the 1.25 (or both) more accurately but as a compromise between 1.386 and 1.96, that 1.7 is not even accurate to two significant figures (it's just a ballpark compromise value), so the additional precision is pointless (they might as well have just rounded the whole thing to 1.6 and be done with it). Note that there's no adjustment for multiple comparisons anywhere here. There's some distinct analogies in the confidence limits for a difference in the Tukey-Kramer HSD: $$\bar{y}_{i\bullet}-\bar{y}_{j\bullet} \pm \frac{q_{\alpha;k;N-k}}{\sqrt{2}}\widehat{\sigma}_\varepsilon \sqrt{\frac{1}{n_i} + \frac{1}{n_j}}$$ But note that this is a combined interval, not two separate contributions to a difference (so we have a term in $c.\sqrt{\frac{1}{n_i} + \frac{1}{n_j}}$ rather than the two contributing separately $k.\sqrt{\frac{1}{n_{i}}}$ and $k.\sqrt{\frac{1}{n_j}}$ and we assume constant variance (so we're not dealing with the compromise with the $1.96$ - when we might have very different variances - rather than the asymptotic $1.96/\sqrt{2}$ case) it's based on means, not medians (so no 1.35) it's based on $q$, which is based in turn on the largest difference in means (so there's not even any 1.96 part in this one, even one divided by $\sqrt{2}$). By contrast in comparing multiple box plots, there's no consideration of basing the notches on the largest difference in medians, it's all purely pairwise. So while several of the ideas behind the form of components are somewhat analogous, they're actually quite different in what they're doing. [1] McGill, R., Tukey, J. W. and Larsen, W. A. (1978) Variations of box plots. The American Statistician 32, 12–16.
Box plot notches vs. Tukey-Kramer interval
As far as the notched boxplot goes, the McGill et al [1] reference mentioned in your question contains pretty complete details (not everything I say here is explicitly mentioned there, but nevertheles
Box plot notches vs. Tukey-Kramer interval As far as the notched boxplot goes, the McGill et al [1] reference mentioned in your question contains pretty complete details (not everything I say here is explicitly mentioned there, but nevertheless it's sufficiently detailed to figure it out). The interval is a robustified but Gaussian-based one The paper quotes the following interval for notches (where $M$ is the sample median and $R$ is the sample interquartile range): $$M\pm 1.7 \times 1.25R/(1.35\sqrt{N})$$ where: $1.35$ is an asymptotic conversion factor to turn IQRs into estimates of $\sigma$ -- specifically, it's approximately the difference between the 0.75 quantile and the 0.25 quantile of a standard normal; the population quartiles are about 1.35 $\sigma$ apart, so a value of around $R/1.35$ should be a consistent (asymptotically unbiased) estimate of $\sigma$ (more accurately, about 1.349). $1.25$ comes in because we're dealing with the asymptotic standard error of the median rather than the mean. Specifically, the asymptotic variance of the sample median is $\frac{1}{4nf_0^2}$ where $f_0$ is the density-height at the median. For a normal distribution, $f_0$ is $\frac{1}{\sqrt{2\pi}\sigma}\approx \frac{0.3989}{\sigma}$, so the asymptotic standard error of the sample median is $\frac{1}{2\sqrt{N}f_0}= \sqrt{\pi/2}\sigma/\sqrt{N}\approx 1.253\sigma/\sqrt{N}$. As StasK mentions here, the smaller $N$ is, the the more dubious this would be (replacing his third reason with one about the reasonableness of using the normal distribution in the first place. Combining the above two, we obtain an asymptotic estimate of the standard error of the median of about $1.25R/(1.35\sqrt{N})$. McGill et al credit this to Kendall and Stuart (I don't recall whether the particular formula occurs there or not, but the components will be). So all that's left to discuss is the factor of 1.7. Note that if we were comparing one sample to a fixed value (say a hypothesized median) we'd use 1.96 for a 5% test; consequently, if we had two very different standard errors (one relatively large, one very small), that would be about the factor to use (since if the null were true, the difference would be almost entirely due to variation in the one with larger standard error, and the small one could - approximately - be treated as effectively fixed). On the other hand, if the two standard errors were the same, 1.96 would be much too large a factor, since both sets of notches come into it -- for the two sets of notches to fail to overlap we are adding one of each. This would make the right factor $1.96/\sqrt{2}\approx 1.386$ asymptotically. Somewhere in between , we have 1.7 as a rough compromise factor. McGill et al describe it as "empirically selected". It does come quite close to assuming a particular ratio of variances, so my guess (and it's nothing more than that) is that the empirical selection (presumably based on some simulation) was between a set of round-value ratios for the variances (like 1:1, 2:1,3:1,... ), of which the "best compromise" $r$ from the $r:1$ ratio was then plugged into $1.96/\sqrt{1+1/r}$ rounded to two figures. At least it's a plausible way to end up very close to 1.7. Putting them all (1.35,1.25 and 1.7) together gives about 1.57. Some sources get 1.58 by computing the 1.35 or the 1.25 (or both) more accurately but as a compromise between 1.386 and 1.96, that 1.7 is not even accurate to two significant figures (it's just a ballpark compromise value), so the additional precision is pointless (they might as well have just rounded the whole thing to 1.6 and be done with it). Note that there's no adjustment for multiple comparisons anywhere here. There's some distinct analogies in the confidence limits for a difference in the Tukey-Kramer HSD: $$\bar{y}_{i\bullet}-\bar{y}_{j\bullet} \pm \frac{q_{\alpha;k;N-k}}{\sqrt{2}}\widehat{\sigma}_\varepsilon \sqrt{\frac{1}{n_i} + \frac{1}{n_j}}$$ But note that this is a combined interval, not two separate contributions to a difference (so we have a term in $c.\sqrt{\frac{1}{n_i} + \frac{1}{n_j}}$ rather than the two contributing separately $k.\sqrt{\frac{1}{n_{i}}}$ and $k.\sqrt{\frac{1}{n_j}}$ and we assume constant variance (so we're not dealing with the compromise with the $1.96$ - when we might have very different variances - rather than the asymptotic $1.96/\sqrt{2}$ case) it's based on means, not medians (so no 1.35) it's based on $q$, which is based in turn on the largest difference in means (so there's not even any 1.96 part in this one, even one divided by $\sqrt{2}$). By contrast in comparing multiple box plots, there's no consideration of basing the notches on the largest difference in medians, it's all purely pairwise. So while several of the ideas behind the form of components are somewhat analogous, they're actually quite different in what they're doing. [1] McGill, R., Tukey, J. W. and Larsen, W. A. (1978) Variations of box plots. The American Statistician 32, 12–16.
Box plot notches vs. Tukey-Kramer interval As far as the notched boxplot goes, the McGill et al [1] reference mentioned in your question contains pretty complete details (not everything I say here is explicitly mentioned there, but nevertheles
23,978
Periodic splines to fit periodic data
Splines are used in regression modeling to model possibly complex, non-linear functional forms. A spline smoothed trend consists of piecewise continuous polynomials whose leading coefficient changes at each breakpoint or knot. The spline may be specified in terms of the polynomial degree of the trend as well as the breakpoints. A spline representation of a covariate extends a single vector of observed values into a matrix whose dimension is the polynomial degree plus the number of knots. A periodic version of splines is merely a periodic version of any regression: the data are cut into replicates of the length of the period. So for instance, modeling a diurnal trend in a multiday experiment on rats would require recoding time of experiment into 24 hour increments, so the 154th hour would be the modulo 24 value of 10 (154 = 6*24 + 10). If you fit a linear regression on the cut data, it would estimate a saw-tooth waveform for the trend. If you fit a step function somewhere in the period, it would be a square waveform that fits the series. The spline is capable of expressing a much more sophisticated wavelet. For what it's worth, in the splines package, there is a function periodicSpline which does exactly this. I don't find R's default spline "bs" implementation useful for interpretation. So I wrote my own script below. For a spline of degree $p$ with $n_k$ knots, this representation gives the first $p$ columns the standard polynomial representation, the $p+i$-th columns ($i \le n_k$) are simply evaluated as $S_{p+i} = (X - k_i)^p\mathcal{I}(X<k_i)$ where $k$ is the actual vector of knots. myspline <- function(x, degree, knots) { knots <- sort(knots) val <- cbind(x, outer(x, knots, `-`)) val[val < 0] <- 0 val <- val^degree if(degree > 1) val <- cbind(outer(x, 1:{degree-1}, `^`), val) colnames(val) <- c( paste0('spline', 1:{degree-1}, '.1'), paste0('spline', degree, '.', seq(length(knots)+1)) ) val } For a little case study, interpolate a sinusoidal trend on the domain of 0 to $2\pi$ (or $\tau$) like so: x <- seq(0, 2*pi, by=pi/2^8) y <- sin(x) plot(x,y, type='l') s <- myspline(x, 2, pi) fit <- lm(y ~ s) yhat <- predict(fit) lines(x,yhat) You'll see they're quite concordant. Further, the naming convention enables interpretation. In the regression output you see: > summary(fit) Call: lm(formula = y ~ s) Residuals: Min 1Q Median 3Q Max -0.04564 -0.02050 0.00000 0.02050 0.04564 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.033116 0.003978 -8.326 7.78e-16 *** sspline1.1 1.268812 0.004456 284.721 < 2e-16 *** sspline2.1 -0.400520 0.001031 -388.463 < 2e-16 *** sspline2.2 0.801040 0.001931 414.878 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.02422 on 509 degrees of freedom Multiple R-squared: 0.9988, Adjusted R-squared: 0.9988 F-statistic: 1.453e+05 on 3 and 509 DF, p-value: < 2.2e-16 The first set of covariates for my spline1.1-degree is the polynomial trend for the first domain behind the first breakpoint. The linear term is the slope of the tangent at the origin, X=0. This is nearly 1 which would be indicated by the derivative of the sinusoidal curve (cos(0) = 1), but we must bear in mind that these are approximations, and the error of extrapolating the quadratic trend out $\pi/2$ is prone to error. The quadratic term indicates a negative, concave shape. The spline2.2 term indicates a difference from the first quadratic slope, leading to a 0.4 positive leading coefficient indicating an upward, convex shape. So we now have interpretation available for spline output and can judge the inference and estimates accordingly. I'm going to assume that you know the periodicity of the data at hand. If the data lack a growth or moving average component, you may transform a long time series into replicates of a short series of a duration of 1 period. You now have replicates and can use data analysis to estimate the recurrent trend. Suppose I generate the following somewhat noisey, very long time series: x <- seq(1, 100, by=0.01) y <- sin(x) + rnorm(length(x), 0, 10) xp <- x %% (2*pi) s <- myspline(xp, degree=2, knots=pi) lm(y ~ s) The resulting output shows reasonable performance. > summary(fit) Call: lm(formula = y ~ s) Residuals: Min 1Q Median 3Q Max -39.585 -6.736 0.013 6.750 37.389 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.48266 0.38155 -1.265 0.205894 sspline1.1 1.52798 0.42237 3.618 0.000299 *** sspline2.1 -0.44380 0.09725 -4.564 5.09e-06 *** sspline2.2 0.76553 0.18198 4.207 2.61e-05 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 9.949 on 9897 degrees of freedom Multiple R-squared: 0.006406, Adjusted R-squared: 0.006105 F-statistic: 21.27 on 3 and 9897 DF, p-value: 9.959e-14
Periodic splines to fit periodic data
Splines are used in regression modeling to model possibly complex, non-linear functional forms. A spline smoothed trend consists of piecewise continuous polynomials whose leading coefficient changes a
Periodic splines to fit periodic data Splines are used in regression modeling to model possibly complex, non-linear functional forms. A spline smoothed trend consists of piecewise continuous polynomials whose leading coefficient changes at each breakpoint or knot. The spline may be specified in terms of the polynomial degree of the trend as well as the breakpoints. A spline representation of a covariate extends a single vector of observed values into a matrix whose dimension is the polynomial degree plus the number of knots. A periodic version of splines is merely a periodic version of any regression: the data are cut into replicates of the length of the period. So for instance, modeling a diurnal trend in a multiday experiment on rats would require recoding time of experiment into 24 hour increments, so the 154th hour would be the modulo 24 value of 10 (154 = 6*24 + 10). If you fit a linear regression on the cut data, it would estimate a saw-tooth waveform for the trend. If you fit a step function somewhere in the period, it would be a square waveform that fits the series. The spline is capable of expressing a much more sophisticated wavelet. For what it's worth, in the splines package, there is a function periodicSpline which does exactly this. I don't find R's default spline "bs" implementation useful for interpretation. So I wrote my own script below. For a spline of degree $p$ with $n_k$ knots, this representation gives the first $p$ columns the standard polynomial representation, the $p+i$-th columns ($i \le n_k$) are simply evaluated as $S_{p+i} = (X - k_i)^p\mathcal{I}(X<k_i)$ where $k$ is the actual vector of knots. myspline <- function(x, degree, knots) { knots <- sort(knots) val <- cbind(x, outer(x, knots, `-`)) val[val < 0] <- 0 val <- val^degree if(degree > 1) val <- cbind(outer(x, 1:{degree-1}, `^`), val) colnames(val) <- c( paste0('spline', 1:{degree-1}, '.1'), paste0('spline', degree, '.', seq(length(knots)+1)) ) val } For a little case study, interpolate a sinusoidal trend on the domain of 0 to $2\pi$ (or $\tau$) like so: x <- seq(0, 2*pi, by=pi/2^8) y <- sin(x) plot(x,y, type='l') s <- myspline(x, 2, pi) fit <- lm(y ~ s) yhat <- predict(fit) lines(x,yhat) You'll see they're quite concordant. Further, the naming convention enables interpretation. In the regression output you see: > summary(fit) Call: lm(formula = y ~ s) Residuals: Min 1Q Median 3Q Max -0.04564 -0.02050 0.00000 0.02050 0.04564 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.033116 0.003978 -8.326 7.78e-16 *** sspline1.1 1.268812 0.004456 284.721 < 2e-16 *** sspline2.1 -0.400520 0.001031 -388.463 < 2e-16 *** sspline2.2 0.801040 0.001931 414.878 < 2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 0.02422 on 509 degrees of freedom Multiple R-squared: 0.9988, Adjusted R-squared: 0.9988 F-statistic: 1.453e+05 on 3 and 509 DF, p-value: < 2.2e-16 The first set of covariates for my spline1.1-degree is the polynomial trend for the first domain behind the first breakpoint. The linear term is the slope of the tangent at the origin, X=0. This is nearly 1 which would be indicated by the derivative of the sinusoidal curve (cos(0) = 1), but we must bear in mind that these are approximations, and the error of extrapolating the quadratic trend out $\pi/2$ is prone to error. The quadratic term indicates a negative, concave shape. The spline2.2 term indicates a difference from the first quadratic slope, leading to a 0.4 positive leading coefficient indicating an upward, convex shape. So we now have interpretation available for spline output and can judge the inference and estimates accordingly. I'm going to assume that you know the periodicity of the data at hand. If the data lack a growth or moving average component, you may transform a long time series into replicates of a short series of a duration of 1 period. You now have replicates and can use data analysis to estimate the recurrent trend. Suppose I generate the following somewhat noisey, very long time series: x <- seq(1, 100, by=0.01) y <- sin(x) + rnorm(length(x), 0, 10) xp <- x %% (2*pi) s <- myspline(xp, degree=2, knots=pi) lm(y ~ s) The resulting output shows reasonable performance. > summary(fit) Call: lm(formula = y ~ s) Residuals: Min 1Q Median 3Q Max -39.585 -6.736 0.013 6.750 37.389 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) -0.48266 0.38155 -1.265 0.205894 sspline1.1 1.52798 0.42237 3.618 0.000299 *** sspline2.1 -0.44380 0.09725 -4.564 5.09e-06 *** sspline2.2 0.76553 0.18198 4.207 2.61e-05 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 Residual standard error: 9.949 on 9897 degrees of freedom Multiple R-squared: 0.006406, Adjusted R-squared: 0.006105 F-statistic: 21.27 on 3 and 9897 DF, p-value: 9.959e-14
Periodic splines to fit periodic data Splines are used in regression modeling to model possibly complex, non-linear functional forms. A spline smoothed trend consists of piecewise continuous polynomials whose leading coefficient changes a
23,979
Periodic splines to fit periodic data
I was looking for an answer to this question recently and found the following solution, using the recent package splines2. There is a function to compute periodic m-splines (m-splines are normalized b-splines). Usage is very similar to the bs function. Let's say we have a 24-h noisy stationary signal, measured at fixed intervals over 2 days: library(ggplot2) library(splines2) t <- seq(0, 48, length.out = 500) y <- sin(time/2*pi/6) + rnorm(500, sd = 0.5) df <- data.frame(t = t, y = y) ggplot(df, aes(x = t, y = y)) + geom_point() + theme_minimal() Now we can fit a periodic spline on this data and create predictions for our regular intervals: # (boundary knots determine the period) pspline_fit <- lm(y ~ mSpline(x = t, df = 4, periodic = TRUE, Boundary.knots = c(0, 24)), data = df) df <- cbind(df, as.data.frame(predict(pspline_fit, interval = "prediction"))) pred_plot <- ggplot(df, aes(x = t, y = y)) + geom_ribbon(aes(ymin = lwr, ymax = upr), alpha = 0.4) + geom_line(aes(y = fit), size = 1, colour = "blue") + geom_point() + theme_minimal() pred_plot And what's nice about the periodic spline is that there is no discontinuity at the 24h mark, which you can visualise using polar coordinates: pred_plot + xlim(0, 24) + coord_polar()
Periodic splines to fit periodic data
I was looking for an answer to this question recently and found the following solution, using the recent package splines2. There is a function to compute periodic m-splines (m-splines are normalized b
Periodic splines to fit periodic data I was looking for an answer to this question recently and found the following solution, using the recent package splines2. There is a function to compute periodic m-splines (m-splines are normalized b-splines). Usage is very similar to the bs function. Let's say we have a 24-h noisy stationary signal, measured at fixed intervals over 2 days: library(ggplot2) library(splines2) t <- seq(0, 48, length.out = 500) y <- sin(time/2*pi/6) + rnorm(500, sd = 0.5) df <- data.frame(t = t, y = y) ggplot(df, aes(x = t, y = y)) + geom_point() + theme_minimal() Now we can fit a periodic spline on this data and create predictions for our regular intervals: # (boundary knots determine the period) pspline_fit <- lm(y ~ mSpline(x = t, df = 4, periodic = TRUE, Boundary.knots = c(0, 24)), data = df) df <- cbind(df, as.data.frame(predict(pspline_fit, interval = "prediction"))) pred_plot <- ggplot(df, aes(x = t, y = y)) + geom_ribbon(aes(ymin = lwr, ymax = upr), alpha = 0.4) + geom_line(aes(y = fit), size = 1, colour = "blue") + geom_point() + theme_minimal() pred_plot And what's nice about the periodic spline is that there is no discontinuity at the 24h mark, which you can visualise using polar coordinates: pred_plot + xlim(0, 24) + coord_polar()
Periodic splines to fit periodic data I was looking for an answer to this question recently and found the following solution, using the recent package splines2. There is a function to compute periodic m-splines (m-splines are normalized b
23,980
What is the moment of a joint random variable?
There isn't a "the" with respect to moments, since there are many of them, but moments of bivariate variables are indexed by two indices, not one. So rather than $k$-th moment, $\mu_k$ you have $(j,k)$-th moments, $\mu_{j,k}$ (sometimes written $\mu_{jk}$ when that's not ambiguous). We might speak of $\mu_{1,1}$, the $(1,1)$ moment or $\mu_{1,2}$, the $(1,2)$ moment, or $\mu_{2,2}$, and so on. These are sometimes called mixed moments. So generalizing your one-dimensional continuous example, $\mu_{j,k} = \int\int x^j y^k f(x,y) \, dx dy$ This generalizes to higher dimensions.
What is the moment of a joint random variable?
There isn't a "the" with respect to moments, since there are many of them, but moments of bivariate variables are indexed by two indices, not one. So rather than $k$-th moment, $\mu_k$ you have $(j,k)
What is the moment of a joint random variable? There isn't a "the" with respect to moments, since there are many of them, but moments of bivariate variables are indexed by two indices, not one. So rather than $k$-th moment, $\mu_k$ you have $(j,k)$-th moments, $\mu_{j,k}$ (sometimes written $\mu_{jk}$ when that's not ambiguous). We might speak of $\mu_{1,1}$, the $(1,1)$ moment or $\mu_{1,2}$, the $(1,2)$ moment, or $\mu_{2,2}$, and so on. These are sometimes called mixed moments. So generalizing your one-dimensional continuous example, $\mu_{j,k} = \int\int x^j y^k f(x,y) \, dx dy$ This generalizes to higher dimensions.
What is the moment of a joint random variable? There isn't a "the" with respect to moments, since there are many of them, but moments of bivariate variables are indexed by two indices, not one. So rather than $k$-th moment, $\mu_k$ you have $(j,k)
23,981
What is the moment of a joint random variable?
As @Glen_b♦ has mentioned, moment generalized to cross-moment (relate concepts: joint moment generating function, joint characteristic function and cumulant) in higher dimensions. That said, to me this definition doesn't feel like an equivalent to the univariate moment, because cross-moment evaluates to a real number, but for, say, a multivariate normal vector, the mean is a vector and the variance is a matrix. I speculate that one might define higher-dimensional "moments" using derivatives of the joint characteristic function $\varphi_\mathbf{X}(\mathbf{t})=E[e^{i\mathbf{t}'\mathbf{X}}]$, here derivatives are generalized using rank-$k$ tensors (so the second order derivative would be a Hessian matrix). There are many other interesting related topics, such as: Measures of Multivariate Skewness and Kurtosis with Applications.
What is the moment of a joint random variable?
As @Glen_b♦ has mentioned, moment generalized to cross-moment (relate concepts: joint moment generating function, joint characteristic function and cumulant) in higher dimensions. That said, to me th
What is the moment of a joint random variable? As @Glen_b♦ has mentioned, moment generalized to cross-moment (relate concepts: joint moment generating function, joint characteristic function and cumulant) in higher dimensions. That said, to me this definition doesn't feel like an equivalent to the univariate moment, because cross-moment evaluates to a real number, but for, say, a multivariate normal vector, the mean is a vector and the variance is a matrix. I speculate that one might define higher-dimensional "moments" using derivatives of the joint characteristic function $\varphi_\mathbf{X}(\mathbf{t})=E[e^{i\mathbf{t}'\mathbf{X}}]$, here derivatives are generalized using rank-$k$ tensors (so the second order derivative would be a Hessian matrix). There are many other interesting related topics, such as: Measures of Multivariate Skewness and Kurtosis with Applications.
What is the moment of a joint random variable? As @Glen_b♦ has mentioned, moment generalized to cross-moment (relate concepts: joint moment generating function, joint characteristic function and cumulant) in higher dimensions. That said, to me th
23,982
Johansen test for cointegration
In Johansen cointegration test, the alternative hypothesis for the eigenvalue test is that there are $r+1$ cointegration relations. The test is therefore sequential: you test first for $r=0$, then $r=1$, etc. The test concludes on the value of $r$ when the test fails to reject $H_0$ for the first time. In your case, the test fails to reject the null hypothesis for the first time when $r=1$. Therefore, you have one cointegration relationship.
Johansen test for cointegration
In Johansen cointegration test, the alternative hypothesis for the eigenvalue test is that there are $r+1$ cointegration relations. The test is therefore sequential: you test first for $r=0$, then $r=
Johansen test for cointegration In Johansen cointegration test, the alternative hypothesis for the eigenvalue test is that there are $r+1$ cointegration relations. The test is therefore sequential: you test first for $r=0$, then $r=1$, etc. The test concludes on the value of $r$ when the test fails to reject $H_0$ for the first time. In your case, the test fails to reject the null hypothesis for the first time when $r=1$. Therefore, you have one cointegration relationship.
Johansen test for cointegration In Johansen cointegration test, the alternative hypothesis for the eigenvalue test is that there are $r+1$ cointegration relations. The test is therefore sequential: you test first for $r=0$, then $r=
23,983
How to use decision stump as weak learner in Adaboost?
The typical way of training a (1-level) Decision Tree is finding such an attribute that gives the purest split. I.e. if we split our dataset into two subsets, we want the labels inside these subsets to be as homogeneous as possible. So it can also be seen as building many trees - a tree for each attribute - and then selecting the tree that produces the best split. In some cases it also makes sense to select a subset of attributes and then train trees on the subset. For example, this is used in Random Forest for reducing correlation between individual trees. But when it comes to AdaBoost, typically it is enough to make sure the base classifier can be trained on weighed data points, and random feature selection is less important. Decision trees can handle weights (see e.g. here or here). It may be done by weighting the contribution of each data point to the total subset impurity. For reference I'll also add my AdaBoost implementation in python using numpy and sklearn's DecisionTreeClassifier with max_depth=1: # input: dataset X and labels y (in {+1, -1}) hypotheses = [] hypothesis_weights = [] N, _ = X.shape d = np.ones(N) / N for t in range(num_iterations): h = DecisionTreeClassifier(max_depth=1) h.fit(X, y, sample_weight=d) pred = h.predict(X) eps = d.dot(pred != y) alpha = (np.log(1 - eps) - np.log(eps)) / 2 d = d * np.exp(- alpha * y * pred) d = d / d.sum() hypotheses.append(h) hypothesis_weights.append(alpha) For predicting the labels: # X input, y output y = np.zeros(N) for (h, alpha) in zip(hypotheses, hypotheses_weight): y = y + alpha * h.predict(X) y = np.sign(y)
How to use decision stump as weak learner in Adaboost?
The typical way of training a (1-level) Decision Tree is finding such an attribute that gives the purest split. I.e. if we split our dataset into two subsets, we want the labels inside these subsets t
How to use decision stump as weak learner in Adaboost? The typical way of training a (1-level) Decision Tree is finding such an attribute that gives the purest split. I.e. if we split our dataset into two subsets, we want the labels inside these subsets to be as homogeneous as possible. So it can also be seen as building many trees - a tree for each attribute - and then selecting the tree that produces the best split. In some cases it also makes sense to select a subset of attributes and then train trees on the subset. For example, this is used in Random Forest for reducing correlation between individual trees. But when it comes to AdaBoost, typically it is enough to make sure the base classifier can be trained on weighed data points, and random feature selection is less important. Decision trees can handle weights (see e.g. here or here). It may be done by weighting the contribution of each data point to the total subset impurity. For reference I'll also add my AdaBoost implementation in python using numpy and sklearn's DecisionTreeClassifier with max_depth=1: # input: dataset X and labels y (in {+1, -1}) hypotheses = [] hypothesis_weights = [] N, _ = X.shape d = np.ones(N) / N for t in range(num_iterations): h = DecisionTreeClassifier(max_depth=1) h.fit(X, y, sample_weight=d) pred = h.predict(X) eps = d.dot(pred != y) alpha = (np.log(1 - eps) - np.log(eps)) / 2 d = d * np.exp(- alpha * y * pred) d = d / d.sum() hypotheses.append(h) hypothesis_weights.append(alpha) For predicting the labels: # X input, y output y = np.zeros(N) for (h, alpha) in zip(hypotheses, hypotheses_weight): y = y + alpha * h.predict(X) y = np.sign(y)
How to use decision stump as weak learner in Adaboost? The typical way of training a (1-level) Decision Tree is finding such an attribute that gives the purest split. I.e. if we split our dataset into two subsets, we want the labels inside these subsets t
23,984
Friendly tutorial or introduction to reduced-rank regression
I agree that Section 3.7 in The Elements of Statistical Learning is quite confusing. I love this book but hate this section. The PDF available online has this nice Scream pictogram near the section title, so don't worry if you find the math there too complicated. Here are some other references. There is one book specifically focusing on RRR: Reinsel & Velu, 1998, Multivariate Reduced-Rank Regression: Theory and Applications And there is a textbook on multivariate statistics with good coverage of RRR: Izenman, 2008/2013, Modern Multivariate Statistical Techniques: Regression, Classification, and Manifold Learning Note that this is the same Izenman who has originally coined the term "reduced-rank regression" in Izenman, 1975, Reduced-rank regression for the multivariate linear model I don't know of a good tutorial paper, but have recently come across this PhD dissertation (that is essentially a composition of three separate papers, available elsewhere too): Mukherjee, 2013, Topics on Reduced Rank Methods for Multivariate Regression Most of it is quite technical, but it can be useful to read the introduction and the beginning of the first main chapter. There is also lots of further references there, in the literature review in the beginning. I am only giving references here, but I might post a more substantial answer about RRR in your other question What is "reduced-rank regression" all about? when I have some spare time and if there is any interest.
Friendly tutorial or introduction to reduced-rank regression
I agree that Section 3.7 in The Elements of Statistical Learning is quite confusing. I love this book but hate this section. The PDF available online has this nice Scream pictogram near the section ti
Friendly tutorial or introduction to reduced-rank regression I agree that Section 3.7 in The Elements of Statistical Learning is quite confusing. I love this book but hate this section. The PDF available online has this nice Scream pictogram near the section title, so don't worry if you find the math there too complicated. Here are some other references. There is one book specifically focusing on RRR: Reinsel & Velu, 1998, Multivariate Reduced-Rank Regression: Theory and Applications And there is a textbook on multivariate statistics with good coverage of RRR: Izenman, 2008/2013, Modern Multivariate Statistical Techniques: Regression, Classification, and Manifold Learning Note that this is the same Izenman who has originally coined the term "reduced-rank regression" in Izenman, 1975, Reduced-rank regression for the multivariate linear model I don't know of a good tutorial paper, but have recently come across this PhD dissertation (that is essentially a composition of three separate papers, available elsewhere too): Mukherjee, 2013, Topics on Reduced Rank Methods for Multivariate Regression Most of it is quite technical, but it can be useful to read the introduction and the beginning of the first main chapter. There is also lots of further references there, in the literature review in the beginning. I am only giving references here, but I might post a more substantial answer about RRR in your other question What is "reduced-rank regression" all about? when I have some spare time and if there is any interest.
Friendly tutorial or introduction to reduced-rank regression I agree that Section 3.7 in The Elements of Statistical Learning is quite confusing. I love this book but hate this section. The PDF available online has this nice Scream pictogram near the section ti
23,985
Constructing a continuous distribution to match $m$ moments
I do not know about any exact matching technique. If an approximated technique could work for you (meaning getting a density which approximately matches m given moments), then you could consider using an orthogonal polynomial series approach. You would choose a polynomial basis (Laguerre, Hermite, etc) depending on the range of your data. I describe below a technique that I have used in Arbel et al. (1) for a compactly supported distribution (see details in Section 3). In order to set the notation, let us consider a generic continuous random variable $X$ on $[0,1]$, and denote by $f$ its density (to be approximated), and its raw moments by $\gamma_r=\mathbb{E}\big[X^r\big]$, with $r\in\mathbb{N}$. Denote the basis of Jacobi polynomials by $$G_i(s) = \sum_{r=0}^i G_{i,r}s^r,\,i\geq 1.$$ Such polynomials are orthogonal with respect to the $L^2$-product $$\langle F,G \rangle=\int_0^1 F(s) G(s) w_{a,b}(s)d s,$$ where $w_{a,b}(s)=s^{a-1}(1-s)^{b-1}$ is named the weight function of the basis and is proportional to a beta density in the case of Jacobi polynomials. Any univariate density $f$ supported on $[0,1]$ can be uniquely decomposed on such a basis and therefore there is a unique sequence of real numbers $(\lambda_i)_{i \geq 0}$ such that $$f(s)=w_{a,b}(s)\sum_{i=0}^\infty \lambda_i G_i(s).$$ From the evaluation of $\int_0^1 f(s)\, G_i(s)\,d s$ it follows that each $\lambda_i$ coincides with a linear combination of the first $i$ moments of $S$, specifically $\lambda_i=\sum_{r=0}^i G_{i,r}\gamma_r$. Then, truncate the representation of $f$ in the Jacobi basis at a given level $N$, providing the approximation $$f_N(s)=w_{a,b}(s)\sum_{i=0}^N \left(\sum_{r=0}^i G_{i,r}\mu_r\right) G_i(s).$$ That polynomial approximation is not necessarily a density as it might fail to be positive or to integrate to 1. In order to overcome this problem, one can consider the density $\pi$ proportional to its positive part, thus defined by $\pi(s)\propto\max(f_N(s),0)$. If sampling from $\pi$ is needed, one can resort to the rejection sampler, see for instance Robert & Casella (2). There is a companion R package called momentify that provides the approximated distribution given moments, and allows to sample from it, available at this link: https://www.researchgate.net/publication/287608666_momentify_R_package, and discussed at this blog post. Below are two examples with increasing the number of moments involved. Note that the fit is much better for the unimodal density than the multimodal one. References (1) Julyan Arbel, Antonio Lijoi, and Bernardo Nipoti. Full Bayesian inference with hazard mixture models. Computational Statistics & Data Analysis 93 (2016): 359-372. arXiv link, journal link. (2) Christian Robert, and George Casella. "Monte Carlo Statistical Methods Springer-Verlag." New York (2004).
Constructing a continuous distribution to match $m$ moments
I do not know about any exact matching technique. If an approximated technique could work for you (meaning getting a density which approximately matches m given moments), then you could consider using
Constructing a continuous distribution to match $m$ moments I do not know about any exact matching technique. If an approximated technique could work for you (meaning getting a density which approximately matches m given moments), then you could consider using an orthogonal polynomial series approach. You would choose a polynomial basis (Laguerre, Hermite, etc) depending on the range of your data. I describe below a technique that I have used in Arbel et al. (1) for a compactly supported distribution (see details in Section 3). In order to set the notation, let us consider a generic continuous random variable $X$ on $[0,1]$, and denote by $f$ its density (to be approximated), and its raw moments by $\gamma_r=\mathbb{E}\big[X^r\big]$, with $r\in\mathbb{N}$. Denote the basis of Jacobi polynomials by $$G_i(s) = \sum_{r=0}^i G_{i,r}s^r,\,i\geq 1.$$ Such polynomials are orthogonal with respect to the $L^2$-product $$\langle F,G \rangle=\int_0^1 F(s) G(s) w_{a,b}(s)d s,$$ where $w_{a,b}(s)=s^{a-1}(1-s)^{b-1}$ is named the weight function of the basis and is proportional to a beta density in the case of Jacobi polynomials. Any univariate density $f$ supported on $[0,1]$ can be uniquely decomposed on such a basis and therefore there is a unique sequence of real numbers $(\lambda_i)_{i \geq 0}$ such that $$f(s)=w_{a,b}(s)\sum_{i=0}^\infty \lambda_i G_i(s).$$ From the evaluation of $\int_0^1 f(s)\, G_i(s)\,d s$ it follows that each $\lambda_i$ coincides with a linear combination of the first $i$ moments of $S$, specifically $\lambda_i=\sum_{r=0}^i G_{i,r}\gamma_r$. Then, truncate the representation of $f$ in the Jacobi basis at a given level $N$, providing the approximation $$f_N(s)=w_{a,b}(s)\sum_{i=0}^N \left(\sum_{r=0}^i G_{i,r}\mu_r\right) G_i(s).$$ That polynomial approximation is not necessarily a density as it might fail to be positive or to integrate to 1. In order to overcome this problem, one can consider the density $\pi$ proportional to its positive part, thus defined by $\pi(s)\propto\max(f_N(s),0)$. If sampling from $\pi$ is needed, one can resort to the rejection sampler, see for instance Robert & Casella (2). There is a companion R package called momentify that provides the approximated distribution given moments, and allows to sample from it, available at this link: https://www.researchgate.net/publication/287608666_momentify_R_package, and discussed at this blog post. Below are two examples with increasing the number of moments involved. Note that the fit is much better for the unimodal density than the multimodal one. References (1) Julyan Arbel, Antonio Lijoi, and Bernardo Nipoti. Full Bayesian inference with hazard mixture models. Computational Statistics & Data Analysis 93 (2016): 359-372. arXiv link, journal link. (2) Christian Robert, and George Casella. "Monte Carlo Statistical Methods Springer-Verlag." New York (2004).
Constructing a continuous distribution to match $m$ moments I do not know about any exact matching technique. If an approximated technique could work for you (meaning getting a density which approximately matches m given moments), then you could consider using
23,986
Deciding between a linear regression model or non-linear regression model
This is a realm of statistics called model selection. A lot of research is done in this area and there's no definitive and easy answer. Let's assume you have $X_1, X_2$, and $X_3$ and you want to know if you should include an $X_3^2$ term in the model. In a situation like this your more parsimonious model is nested in your more complex model. In other words, the variables $X_1, X_2$, and $X_3$ (parsimonious model) are a subset of the variables $X_1, X_2, X_3$, and $X_3^2$ (complex model). In model building you have (at least) one of the following two main goals: Explain the data: you are trying to understand how some set of variables affect your response variable or you are interested in how $X_1$ effects $Y$ while controlling for the effects of $X_2,...X_p$ Predict $Y$: you want to accurately predict $Y$, without caring about what or how many variables are in your model If your goal is number 1, then I recommend the Likelihood Ratio Test (LRT). LRT is used when you have nested models and you want to know "are the data significantly more likely to come from the complex model than the parsimonous model?". This will give you insight into which model better explains the relationship between your data. If your goal is number 2, then I recommend some sort of cross-validation (CV) technique ($k$-fold CV, leave-one-out CV, test-training CV) depending on the size of your data. In summary, these methods build a model on a subset of your data and predict the results on the remaining data. Pick the model that does the best job predicting on the remaining data according to cross-validation.
Deciding between a linear regression model or non-linear regression model
This is a realm of statistics called model selection. A lot of research is done in this area and there's no definitive and easy answer. Let's assume you have $X_1, X_2$, and $X_3$ and you want to know
Deciding between a linear regression model or non-linear regression model This is a realm of statistics called model selection. A lot of research is done in this area and there's no definitive and easy answer. Let's assume you have $X_1, X_2$, and $X_3$ and you want to know if you should include an $X_3^2$ term in the model. In a situation like this your more parsimonious model is nested in your more complex model. In other words, the variables $X_1, X_2$, and $X_3$ (parsimonious model) are a subset of the variables $X_1, X_2, X_3$, and $X_3^2$ (complex model). In model building you have (at least) one of the following two main goals: Explain the data: you are trying to understand how some set of variables affect your response variable or you are interested in how $X_1$ effects $Y$ while controlling for the effects of $X_2,...X_p$ Predict $Y$: you want to accurately predict $Y$, without caring about what or how many variables are in your model If your goal is number 1, then I recommend the Likelihood Ratio Test (LRT). LRT is used when you have nested models and you want to know "are the data significantly more likely to come from the complex model than the parsimonous model?". This will give you insight into which model better explains the relationship between your data. If your goal is number 2, then I recommend some sort of cross-validation (CV) technique ($k$-fold CV, leave-one-out CV, test-training CV) depending on the size of your data. In summary, these methods build a model on a subset of your data and predict the results on the remaining data. Pick the model that does the best job predicting on the remaining data according to cross-validation.
Deciding between a linear regression model or non-linear regression model This is a realm of statistics called model selection. A lot of research is done in this area and there's no definitive and easy answer. Let's assume you have $X_1, X_2$, and $X_3$ and you want to know
23,987
Deciding between a linear regression model or non-linear regression model
When I google for "linearn or non-linear model for regression" I get some links which lead to this book: http://www.graphpad.com/manuals/prism4/RegressionBook.pdf This book is not interesting, and I don't trust it in 100% (for some reasons). I found also this article: http://hunch.net/?p=524 with title: Nearly all natural problems require nonlinearity I also found similar question with pretty good explanation: https://stackoverflow.com/questions/1148513/difference-between-a-linear-problem-and-a-non-linear-problem-essence-of-dot-pro Based on my experience, when you don't know which model use, use both and try another features.
Deciding between a linear regression model or non-linear regression model
When I google for "linearn or non-linear model for regression" I get some links which lead to this book: http://www.graphpad.com/manuals/prism4/RegressionBook.pdf This book is not interesting, and I d
Deciding between a linear regression model or non-linear regression model When I google for "linearn or non-linear model for regression" I get some links which lead to this book: http://www.graphpad.com/manuals/prism4/RegressionBook.pdf This book is not interesting, and I don't trust it in 100% (for some reasons). I found also this article: http://hunch.net/?p=524 with title: Nearly all natural problems require nonlinearity I also found similar question with pretty good explanation: https://stackoverflow.com/questions/1148513/difference-between-a-linear-problem-and-a-non-linear-problem-essence-of-dot-pro Based on my experience, when you don't know which model use, use both and try another features.
Deciding between a linear regression model or non-linear regression model When I google for "linearn or non-linear model for regression" I get some links which lead to this book: http://www.graphpad.com/manuals/prism4/RegressionBook.pdf This book is not interesting, and I d
23,988
Deciding between a linear regression model or non-linear regression model
As you state, linear models are typically simpler than non-linear models, meaning they run faster (building and predicting), are easier to interpret and explain, and usually straight-forward in error measurements. So the goal is to find out if the assumptions of a linear regression hold with your data (if you fail to support linear, then just go with non-linear). Usually you would repeat your single-variable plot with all variables individually, holding all other variables constant. Perhaps more importantly, though, you want to know if you can apply some sort of transformation, variable interaction, or dummy variable to move your data to linear space. If you are able to validate the assumptions, or if you know your data well enough to apply well-motivated or otherwise intelligently informed transformations or modifications, then you want to proceed with that transform and use linear regression. Once you have the residuals, you can plot them versus predicted values or independent variables to further decide if you need to move on to non-linear methods. There is an excellent breakdown of the assumptions of linear regression here at Duke. The four main assumptions are listed, and each one is broken down into the effects on the model, how to diagnose it in the data, and potential ways to "fix" (i.e. transform or add to) the data to make the assumption hold. Here is a small excerpt from the top summarizing the four assumptions addressed, but you should go there and read the breakdowns. There are four principal assumptions which justify the use of linear regression models for purposes of inference or prediction: (i) linearity and additivity of the relationship between dependent and independent variables: (a) The expected value of dependent variable is a straight-line function of each independent variable, holding the others fixed. (b) The slope of that line does not depend on the values of the other variables. (c) The effects of different independent variables on the expected value of the dependent variable are additive. (ii) statistical independence of the errors (in particular, no correlation between >consecutive errors in the case of time series data) (iii) homoscedasticity (constant variance) of the errors (a) versus time (in the case of time series data) (b) versus the predictions (c) versus any independent variable (iv) normality of the error distribution.
Deciding between a linear regression model or non-linear regression model
As you state, linear models are typically simpler than non-linear models, meaning they run faster (building and predicting), are easier to interpret and explain, and usually straight-forward in error
Deciding between a linear regression model or non-linear regression model As you state, linear models are typically simpler than non-linear models, meaning they run faster (building and predicting), are easier to interpret and explain, and usually straight-forward in error measurements. So the goal is to find out if the assumptions of a linear regression hold with your data (if you fail to support linear, then just go with non-linear). Usually you would repeat your single-variable plot with all variables individually, holding all other variables constant. Perhaps more importantly, though, you want to know if you can apply some sort of transformation, variable interaction, or dummy variable to move your data to linear space. If you are able to validate the assumptions, or if you know your data well enough to apply well-motivated or otherwise intelligently informed transformations or modifications, then you want to proceed with that transform and use linear regression. Once you have the residuals, you can plot them versus predicted values or independent variables to further decide if you need to move on to non-linear methods. There is an excellent breakdown of the assumptions of linear regression here at Duke. The four main assumptions are listed, and each one is broken down into the effects on the model, how to diagnose it in the data, and potential ways to "fix" (i.e. transform or add to) the data to make the assumption hold. Here is a small excerpt from the top summarizing the four assumptions addressed, but you should go there and read the breakdowns. There are four principal assumptions which justify the use of linear regression models for purposes of inference or prediction: (i) linearity and additivity of the relationship between dependent and independent variables: (a) The expected value of dependent variable is a straight-line function of each independent variable, holding the others fixed. (b) The slope of that line does not depend on the values of the other variables. (c) The effects of different independent variables on the expected value of the dependent variable are additive. (ii) statistical independence of the errors (in particular, no correlation between >consecutive errors in the case of time series data) (iii) homoscedasticity (constant variance) of the errors (a) versus time (in the case of time series data) (b) versus the predictions (c) versus any independent variable (iv) normality of the error distribution.
Deciding between a linear regression model or non-linear regression model As you state, linear models are typically simpler than non-linear models, meaning they run faster (building and predicting), are easier to interpret and explain, and usually straight-forward in error
23,989
Sufficiency or Insufficiency
I had a discussion with "whuber" and maybe I got a (correct?) hint to look at any sample point: evaluate $\dfrac{P(X=x)}{P(T(X)=T(x))}$ at that sample point $x$ and check if this ratio is independent of the parameter, in this case $p$. So take $x=(1,0,1)$ then $T(1,0,1)=2$. So we evaluate $\dfrac{P(X=(1,0,1))}{P(T(X)=2)}$.Now, $$T(X)=2 \text{ iff } X\in\{(1,0,1),(0,1,0)\}.$$ Due to the i.i.d. property, $$P(X=(1,0,1))=p^2(1-p)\text{ and }P(X=(0,1,0))=p(1-p)^2.$$ Also $$P(T(X)=2)=P(X=(1,0,1))+P(X=(0,1,0))=p(1-p).$$ Hence $$\dfrac{P(X=(1,0,1))}{P(T(X)=2)}=\dfrac{p^2(1-p)}{p(1-p)}=p$$ which is clearly dependent on $p$, and therefore $T$ is not a sufficient statistic.
Sufficiency or Insufficiency
I had a discussion with "whuber" and maybe I got a (correct?) hint to look at any sample point: evaluate $\dfrac{P(X=x)}{P(T(X)=T(x))}$ at that sample point $x$ and check if this ratio is independent
Sufficiency or Insufficiency I had a discussion with "whuber" and maybe I got a (correct?) hint to look at any sample point: evaluate $\dfrac{P(X=x)}{P(T(X)=T(x))}$ at that sample point $x$ and check if this ratio is independent of the parameter, in this case $p$. So take $x=(1,0,1)$ then $T(1,0,1)=2$. So we evaluate $\dfrac{P(X=(1,0,1))}{P(T(X)=2)}$.Now, $$T(X)=2 \text{ iff } X\in\{(1,0,1),(0,1,0)\}.$$ Due to the i.i.d. property, $$P(X=(1,0,1))=p^2(1-p)\text{ and }P(X=(0,1,0))=p(1-p)^2.$$ Also $$P(T(X)=2)=P(X=(1,0,1))+P(X=(0,1,0))=p(1-p).$$ Hence $$\dfrac{P(X=(1,0,1))}{P(T(X)=2)}=\dfrac{p^2(1-p)}{p(1-p)}=p$$ which is clearly dependent on $p$, and therefore $T$ is not a sufficient statistic.
Sufficiency or Insufficiency I had a discussion with "whuber" and maybe I got a (correct?) hint to look at any sample point: evaluate $\dfrac{P(X=x)}{P(T(X)=T(x))}$ at that sample point $x$ and check if this ratio is independent
23,990
Does every semi-positive definite matrix correspond to a covariance matrix?
Going by the definitions of PD and PSD here, yes, I think so, since we can do this by construction. I'll assume for a slightly simpler argument that you mean for matrices with real elements, but with appropriate changes it would extend to complex matrices. Let $A$ be some real PSD matrix; from the definition I linked to, it will be symmetric. Any real symmetric positive definite matrix $A$ can be written as $A = LL^T$. This can be done by $L=Q\sqrt{D}Q^T$ if $A=QDQ^T$ with orthogonal $Q$ and diagonal $D$ and $\sqrt{D}$ as matrix of component wise square roots of $D$. Thus, it needn't be full rank. Let $Z$ be some vector random variable, of the appropriate dimension, with covariance matrix $I$ (which is easy to create). Then $LZ$ has covariance matrix $A$. [At least that's in theory. In practice there'd be various numerical issues to deal with if you wanted good results, and - because of the usual problems with floating point calculation - you'd only approximately get what you need; that is, the population variance of a computed $LZ$ usually wouldn't be exactly $A$. But this sort of thing is always an issue when we come to actually calculate things]
Does every semi-positive definite matrix correspond to a covariance matrix?
Going by the definitions of PD and PSD here, yes, I think so, since we can do this by construction. I'll assume for a slightly simpler argument that you mean for matrices with real elements, but with
Does every semi-positive definite matrix correspond to a covariance matrix? Going by the definitions of PD and PSD here, yes, I think so, since we can do this by construction. I'll assume for a slightly simpler argument that you mean for matrices with real elements, but with appropriate changes it would extend to complex matrices. Let $A$ be some real PSD matrix; from the definition I linked to, it will be symmetric. Any real symmetric positive definite matrix $A$ can be written as $A = LL^T$. This can be done by $L=Q\sqrt{D}Q^T$ if $A=QDQ^T$ with orthogonal $Q$ and diagonal $D$ and $\sqrt{D}$ as matrix of component wise square roots of $D$. Thus, it needn't be full rank. Let $Z$ be some vector random variable, of the appropriate dimension, with covariance matrix $I$ (which is easy to create). Then $LZ$ has covariance matrix $A$. [At least that's in theory. In practice there'd be various numerical issues to deal with if you wanted good results, and - because of the usual problems with floating point calculation - you'd only approximately get what you need; that is, the population variance of a computed $LZ$ usually wouldn't be exactly $A$. But this sort of thing is always an issue when we come to actually calculate things]
Does every semi-positive definite matrix correspond to a covariance matrix? Going by the definitions of PD and PSD here, yes, I think so, since we can do this by construction. I'll assume for a slightly simpler argument that you mean for matrices with real elements, but with
23,991
Generalized log likelihood ratio test for non-nested models
The paper Vuong, Q. H. (1989). Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica, 307-333. has the full theoretical treatment and test procedures. It distinguishes between three situations, "Strictly Non-nested Models", "Overlapping Models", "Nested Models", and also examines cases of misspecification. It is therefore no-accident that it finds that for some cases, the test statistic is distributed as a linear combination of chi-squares. The paper is not light, neither it proposes an "off-the-shelf" testing procedure. But, for once, its (close to) 3,000 citations speak of its merits, being an inspired combination of classical testing framework and the information-theoretic approach.
Generalized log likelihood ratio test for non-nested models
The paper Vuong, Q. H. (1989). Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica, 307-333. has the full theoretical treatment and test procedures. It distinguishes bet
Generalized log likelihood ratio test for non-nested models The paper Vuong, Q. H. (1989). Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica, 307-333. has the full theoretical treatment and test procedures. It distinguishes between three situations, "Strictly Non-nested Models", "Overlapping Models", "Nested Models", and also examines cases of misspecification. It is therefore no-accident that it finds that for some cases, the test statistic is distributed as a linear combination of chi-squares. The paper is not light, neither it proposes an "off-the-shelf" testing procedure. But, for once, its (close to) 3,000 citations speak of its merits, being an inspired combination of classical testing framework and the information-theoretic approach.
Generalized log likelihood ratio test for non-nested models The paper Vuong, Q. H. (1989). Likelihood ratio tests for model selection and non-nested hypotheses. Econometrica, 307-333. has the full theoretical treatment and test procedures. It distinguishes bet
23,992
Generalized log likelihood ratio test for non-nested models
The Generalised likelihood ratio test DOES NOT work the way you are saying. See for example the following lecture notes: http://www.maths.manchester.ac.uk/~peterf/MATH38062/MATH38062%20GLRT.pdf http://www.maths.qmul.ac.uk/~bb/MS_Lectures_12b.pdf The GLRT is defined for hypothesis of the type: $$H_0: \theta\in\Theta_0 \,\,\, vs. \,\,\, H_1: \theta\in\Theta_1,$$ where $\Theta_0\cap\Theta_1=\emptyset$ and $\Theta_0\cup\Theta_1=\Theta$. For the framework you describe, you can compare the models using other tools such as AIC and BIC. Also Bayes factors, if you are willing to go full Bayesian.
Generalized log likelihood ratio test for non-nested models
The Generalised likelihood ratio test DOES NOT work the way you are saying. See for example the following lecture notes: http://www.maths.manchester.ac.uk/~peterf/MATH38062/MATH38062%20GLRT.pdf http:/
Generalized log likelihood ratio test for non-nested models The Generalised likelihood ratio test DOES NOT work the way you are saying. See for example the following lecture notes: http://www.maths.manchester.ac.uk/~peterf/MATH38062/MATH38062%20GLRT.pdf http://www.maths.qmul.ac.uk/~bb/MS_Lectures_12b.pdf The GLRT is defined for hypothesis of the type: $$H_0: \theta\in\Theta_0 \,\,\, vs. \,\,\, H_1: \theta\in\Theta_1,$$ where $\Theta_0\cap\Theta_1=\emptyset$ and $\Theta_0\cup\Theta_1=\Theta$. For the framework you describe, you can compare the models using other tools such as AIC and BIC. Also Bayes factors, if you are willing to go full Bayesian.
Generalized log likelihood ratio test for non-nested models The Generalised likelihood ratio test DOES NOT work the way you are saying. See for example the following lecture notes: http://www.maths.manchester.ac.uk/~peterf/MATH38062/MATH38062%20GLRT.pdf http:/
23,993
Interpreting CCF correlation in R
How can I extract only acf value for lag=0? The acf at lag 0 ($\text{corr}(X_t,X_t)$) is always 1. Do interpret it correctly that there is a cross-correlation for the lag=0, as for this lag the cross-correlation is above the dotted line? If you mean "would I conclude the population cross-correlation is non-zero?" then yes, if that dotted line is for the same significance level as you would use (and the assumptions hold). If it is not outside the lines, this doesn't actually imply that the population cross correlation is actually zero (that would seem astonishing). However, if the interval for it is quite tight around zero, it may sometimes be reasonable to treat it as if it were. How should I interpret the level of cross-correlation in this example, is this significant (as I interpret it right now, there is a small cross-correlation)? 0.3 isn't necessarily small, that depends on your yardstick. In some applications it might be fairly large, in others moderate, in still others small.
Interpreting CCF correlation in R
How can I extract only acf value for lag=0? The acf at lag 0 ($\text{corr}(X_t,X_t)$) is always 1. Do interpret it correctly that there is a cross-correlation for the lag=0, as for this lag the cros
Interpreting CCF correlation in R How can I extract only acf value for lag=0? The acf at lag 0 ($\text{corr}(X_t,X_t)$) is always 1. Do interpret it correctly that there is a cross-correlation for the lag=0, as for this lag the cross-correlation is above the dotted line? If you mean "would I conclude the population cross-correlation is non-zero?" then yes, if that dotted line is for the same significance level as you would use (and the assumptions hold). If it is not outside the lines, this doesn't actually imply that the population cross correlation is actually zero (that would seem astonishing). However, if the interval for it is quite tight around zero, it may sometimes be reasonable to treat it as if it were. How should I interpret the level of cross-correlation in this example, is this significant (as I interpret it right now, there is a small cross-correlation)? 0.3 isn't necessarily small, that depends on your yardstick. In some applications it might be fairly large, in others moderate, in still others small.
Interpreting CCF correlation in R How can I extract only acf value for lag=0? The acf at lag 0 ($\text{corr}(X_t,X_t)$) is always 1. Do interpret it correctly that there is a cross-correlation for the lag=0, as for this lag the cros
23,994
Interpreting CCF correlation in R
Your interpretation of the plot is correct. The only significant cross-correlation at the $5\%$ level of significance is at lag zero. Thus, we cannot say that one variable leads the other variable (that is, we cannot foresee or anticipate the movements in one variable by looking at the other). Both variables evolve concurrently. The correlation is positive, when one increases the other increases as well, and vice versa. The correlation is nonetheless not too strong (around $0.3$). You can get the exact values of the cross-correlations simply by storing the output in an object and looking at the element acf. res <- ccf(x, y, lag.max = 30) res # information stored in the output object names(res) [1] "acf" "type" "n.used" "lag" "series" "snames" res$acf
Interpreting CCF correlation in R
Your interpretation of the plot is correct. The only significant cross-correlation at the $5\%$ level of significance is at lag zero. Thus, we cannot say that one variable leads the other variable (th
Interpreting CCF correlation in R Your interpretation of the plot is correct. The only significant cross-correlation at the $5\%$ level of significance is at lag zero. Thus, we cannot say that one variable leads the other variable (that is, we cannot foresee or anticipate the movements in one variable by looking at the other). Both variables evolve concurrently. The correlation is positive, when one increases the other increases as well, and vice versa. The correlation is nonetheless not too strong (around $0.3$). You can get the exact values of the cross-correlations simply by storing the output in an object and looking at the element acf. res <- ccf(x, y, lag.max = 30) res # information stored in the output object names(res) [1] "acf" "type" "n.used" "lag" "series" "snames" res$acf
Interpreting CCF correlation in R Your interpretation of the plot is correct. The only significant cross-correlation at the $5\%$ level of significance is at lag zero. Thus, we cannot say that one variable leads the other variable (th
23,995
What is the best way to Reshape/Restructure Data?
As I noted in my comment, there isn't enough detail in the question for a real answer to be formulated. Since you need help even finding the right terms and formulating your question, I can speak briefly in generalities. The term you are looking for is data cleaning. This is the process of taking raw, poorly formatted (dirty) data and getting it into shape for analyses. Changing and regularizing formats ("two" $\rightarrow 2$) and reorganizing rows and columns are typical data cleaning tasks. In some sense, data cleaning can be done in any software and can be done with Excel or with R. There will be pros and cons to both choices: Excel: Excel is almost certainly the most common choice for data cleaning (see R fortunes #59 pdf). It is also considered a poor choice by statisticians. The primary reason is that it is hard to ensure that you have caught everything, or that you have treated everything identically, and there is no record of the changes that you have made, so you can't revisit those changes later. The upside of using Excel is that it will be easier to see what you are doing, and you don't have to know much to make changes. (Statisticians will consider the latter an additional con.) R: R will require a steep learning curve. If you aren't very familiar with R or programming, things that can be done quite quickly and easily in Excel will be frustrating to attempt in R. On the other hand, if you ever have to do this again, that learning will have been time well spent. In addition, the ability to write and save your code for cleaning the data in R will alleviate the cons listed above. The following are some links that will help you get started with these tasks in R: You can get a lot of good information on Stack Overflow: How does one reorder colums in R? R: How can I reorder the rows of a matrix, data.frame or vector acording to another one? Quick-R is also a valuable resource: sorting Getting numbers into numerical mode: Convert written number to number in R ?strtoi is a specialized function for converting from hexidicimal, etc., if necessary Another invaluable source for learning about R is UCLA's stats help website: working with factor variables (for your "mostly agree", etc.) Lastly, you can always find a lot of information with good old Google: This search: data cleaning in r, brings up a number of tutorials (none of which I've worked through, FTR). Update: This is a common issue regarding the structure of your dataset when you have multiple measurements per 'study unit' (in your case, a person). If you have one row for every person, your data are said to be in 'wide' form, but then you will necessarily have multiple columns for your response variable, for example. On the other hand, you can have just one column for your response variable (but have multiple rows per person, as a result), in which case your data are said to be in 'long' form. Moving between these two formats is often called 'reshaping' your data, especially in the R world. The standard R function for this is ?reshape. There is a guide to using reshape() on UCLA's stats help website. Many people think reshape is hard to work with. Hadley Wickham has contributed a package called reshape2, which is intended to simplify the process. Hadley's personal website for reshape2 is here, the Quick-R overview is here, and there is a nice-looking tutorial here. There are very many questions on SO about how to reshape data. Most of them are about going from wide to long, because that is typically what data analysts are faced with. Your question is about going from long to wide, which is much less common, but there are still many threads about that, you can look through them with this search. If your heart is set on trying to do this with Excel, there is a thread about writing a VBA macro for Excel to replicate the reshape functionality here: melt / rehshape in Excel using VBA?
What is the best way to Reshape/Restructure Data?
As I noted in my comment, there isn't enough detail in the question for a real answer to be formulated. Since you need help even finding the right terms and formulating your question, I can speak bri
What is the best way to Reshape/Restructure Data? As I noted in my comment, there isn't enough detail in the question for a real answer to be formulated. Since you need help even finding the right terms and formulating your question, I can speak briefly in generalities. The term you are looking for is data cleaning. This is the process of taking raw, poorly formatted (dirty) data and getting it into shape for analyses. Changing and regularizing formats ("two" $\rightarrow 2$) and reorganizing rows and columns are typical data cleaning tasks. In some sense, data cleaning can be done in any software and can be done with Excel or with R. There will be pros and cons to both choices: Excel: Excel is almost certainly the most common choice for data cleaning (see R fortunes #59 pdf). It is also considered a poor choice by statisticians. The primary reason is that it is hard to ensure that you have caught everything, or that you have treated everything identically, and there is no record of the changes that you have made, so you can't revisit those changes later. The upside of using Excel is that it will be easier to see what you are doing, and you don't have to know much to make changes. (Statisticians will consider the latter an additional con.) R: R will require a steep learning curve. If you aren't very familiar with R or programming, things that can be done quite quickly and easily in Excel will be frustrating to attempt in R. On the other hand, if you ever have to do this again, that learning will have been time well spent. In addition, the ability to write and save your code for cleaning the data in R will alleviate the cons listed above. The following are some links that will help you get started with these tasks in R: You can get a lot of good information on Stack Overflow: How does one reorder colums in R? R: How can I reorder the rows of a matrix, data.frame or vector acording to another one? Quick-R is also a valuable resource: sorting Getting numbers into numerical mode: Convert written number to number in R ?strtoi is a specialized function for converting from hexidicimal, etc., if necessary Another invaluable source for learning about R is UCLA's stats help website: working with factor variables (for your "mostly agree", etc.) Lastly, you can always find a lot of information with good old Google: This search: data cleaning in r, brings up a number of tutorials (none of which I've worked through, FTR). Update: This is a common issue regarding the structure of your dataset when you have multiple measurements per 'study unit' (in your case, a person). If you have one row for every person, your data are said to be in 'wide' form, but then you will necessarily have multiple columns for your response variable, for example. On the other hand, you can have just one column for your response variable (but have multiple rows per person, as a result), in which case your data are said to be in 'long' form. Moving between these two formats is often called 'reshaping' your data, especially in the R world. The standard R function for this is ?reshape. There is a guide to using reshape() on UCLA's stats help website. Many people think reshape is hard to work with. Hadley Wickham has contributed a package called reshape2, which is intended to simplify the process. Hadley's personal website for reshape2 is here, the Quick-R overview is here, and there is a nice-looking tutorial here. There are very many questions on SO about how to reshape data. Most of them are about going from wide to long, because that is typically what data analysts are faced with. Your question is about going from long to wide, which is much less common, but there are still many threads about that, you can look through them with this search. If your heart is set on trying to do this with Excel, there is a thread about writing a VBA macro for Excel to replicate the reshape functionality here: melt / rehshape in Excel using VBA?
What is the best way to Reshape/Restructure Data? As I noted in my comment, there isn't enough detail in the question for a real answer to be formulated. Since you need help even finding the right terms and formulating your question, I can speak bri
23,996
What is the best way to Reshape/Restructure Data?
Try following using R: > ddf sess_id user_id quest response 1 1 a age 29 2 1 a satisfied st_agree 3 1 a gender male 4 1 a phone iphone 5 2 a age 29 6 2 a satisfied not_agree 7 2 a gender female 8 2 a phone iphone 9 3 b age 29 10 3 b satisfied agree 11 3 b gender male 12 3 b phone android > > library(reshape2) > dcast(ddf, sess_id+user_id ~ quest, value.var='response') sess_id user_id age gender phone satisfied 1 1 a 29 male iphone st_agree 2 2 a 29 female iphone not_agree 3 3 b 29 male android agree
What is the best way to Reshape/Restructure Data?
Try following using R: > ddf sess_id user_id quest response 1 1 a age 29 2 1 a satisfied st_agree 3 1 a gender male 4 1
What is the best way to Reshape/Restructure Data? Try following using R: > ddf sess_id user_id quest response 1 1 a age 29 2 1 a satisfied st_agree 3 1 a gender male 4 1 a phone iphone 5 2 a age 29 6 2 a satisfied not_agree 7 2 a gender female 8 2 a phone iphone 9 3 b age 29 10 3 b satisfied agree 11 3 b gender male 12 3 b phone android > > library(reshape2) > dcast(ddf, sess_id+user_id ~ quest, value.var='response') sess_id user_id age gender phone satisfied 1 1 a 29 male iphone st_agree 2 2 a 29 female iphone not_agree 3 3 b 29 male android agree
What is the best way to Reshape/Restructure Data? Try following using R: > ddf sess_id user_id quest response 1 1 a age 29 2 1 a satisfied st_agree 3 1 a gender male 4 1
23,997
What is the best way to Reshape/Restructure Data?
In scala this is called an "explode" operation and can be done on a dataFrame. If your data is an rdd, you first convert to dataFrame via toDF command and then use the .explode method.
What is the best way to Reshape/Restructure Data?
In scala this is called an "explode" operation and can be done on a dataFrame. If your data is an rdd, you first convert to dataFrame via toDF command and then use the .explode method.
What is the best way to Reshape/Restructure Data? In scala this is called an "explode" operation and can be done on a dataFrame. If your data is an rdd, you first convert to dataFrame via toDF command and then use the .explode method.
What is the best way to Reshape/Restructure Data? In scala this is called an "explode" operation and can be done on a dataFrame. If your data is an rdd, you first convert to dataFrame via toDF command and then use the .explode method.
23,998
Gibbs sampling for Ising model
Look at this case first. Dropping terms that do not depend on $x_1$, we have. $$ \pi(x_1\mid x_2,\dots,x_d) = \frac{\pi(x_1,x_2,\dots,x_d)}{\pi(x_2,\dots,x_d)} \propto e^{x_1 x_2} $$ $$ P(X_1=-1\mid X_2 = x_2, \dots, X_n=x_n) = \frac{e^{-x_2}}{C} $$ $$ P(X_1=1\mid X_2 = x_2, \dots, X_n=x_n) = \frac{e^{x_2}}{C} $$ $$ \frac{e^{-x_2}}{C} + \frac{e^{x_2}}{C} = 1 \Rightarrow C = 2 \cosh x_2 $$ x_1 <- sample(c(-1, 1), 1, prob = c(exp(-x_2), exp(x_2)) / (2*cosh(x_2))) Generalize it to $x_2,\dots,x_{40}$ (take notice of the differences; see Ilmari's comment bellow). Can you use Ising's analytic results to check your simulation?
Gibbs sampling for Ising model
Look at this case first. Dropping terms that do not depend on $x_1$, we have. $$ \pi(x_1\mid x_2,\dots,x_d) = \frac{\pi(x_1,x_2,\dots,x_d)}{\pi(x_2,\dots,x_d)} \propto e^{x_1 x_2} $$ $$ P(X_1=-1\m
Gibbs sampling for Ising model Look at this case first. Dropping terms that do not depend on $x_1$, we have. $$ \pi(x_1\mid x_2,\dots,x_d) = \frac{\pi(x_1,x_2,\dots,x_d)}{\pi(x_2,\dots,x_d)} \propto e^{x_1 x_2} $$ $$ P(X_1=-1\mid X_2 = x_2, \dots, X_n=x_n) = \frac{e^{-x_2}}{C} $$ $$ P(X_1=1\mid X_2 = x_2, \dots, X_n=x_n) = \frac{e^{x_2}}{C} $$ $$ \frac{e^{-x_2}}{C} + \frac{e^{x_2}}{C} = 1 \Rightarrow C = 2 \cosh x_2 $$ x_1 <- sample(c(-1, 1), 1, prob = c(exp(-x_2), exp(x_2)) / (2*cosh(x_2))) Generalize it to $x_2,\dots,x_{40}$ (take notice of the differences; see Ilmari's comment bellow). Can you use Ising's analytic results to check your simulation?
Gibbs sampling for Ising model Look at this case first. Dropping terms that do not depend on $x_1$, we have. $$ \pi(x_1\mid x_2,\dots,x_d) = \frac{\pi(x_1,x_2,\dots,x_d)}{\pi(x_2,\dots,x_d)} \propto e^{x_1 x_2} $$ $$ P(X_1=-1\m
23,999
Compare the variances of several groups
To assess whether variance differs between groups, you need to use something like Levene's test or the Brown-Forsythe test. I discuss these in my answer here: Why Levene test of equality of variances rather than F ratio? You can perform these tests in R with ?leveneTest in the car package (which actually performs the B-F test by default).
Compare the variances of several groups
To assess whether variance differs between groups, you need to use something like Levene's test or the Brown-Forsythe test. I discuss these in my answer here: Why Levene test of equality of variances
Compare the variances of several groups To assess whether variance differs between groups, you need to use something like Levene's test or the Brown-Forsythe test. I discuss these in my answer here: Why Levene test of equality of variances rather than F ratio? You can perform these tests in R with ?leveneTest in the car package (which actually performs the B-F test by default).
Compare the variances of several groups To assess whether variance differs between groups, you need to use something like Levene's test or the Brown-Forsythe test. I discuss these in my answer here: Why Levene test of equality of variances
24,000
Compare the variances of several groups
Levene's test or the Brown Forsythe test is certainly appropriate, but if the data are normally distributed, I believe Bartlett's test might have greater statistical power. You may want to run a Bartlett test on the data and see if it comes up significant.
Compare the variances of several groups
Levene's test or the Brown Forsythe test is certainly appropriate, but if the data are normally distributed, I believe Bartlett's test might have greater statistical power. You may want to run a Bart
Compare the variances of several groups Levene's test or the Brown Forsythe test is certainly appropriate, but if the data are normally distributed, I believe Bartlett's test might have greater statistical power. You may want to run a Bartlett test on the data and see if it comes up significant.
Compare the variances of several groups Levene's test or the Brown Forsythe test is certainly appropriate, but if the data are normally distributed, I believe Bartlett's test might have greater statistical power. You may want to run a Bart