idx
int64
1
56k
question
stringlengths
15
155
answer
stringlengths
2
29.2k
question_cut
stringlengths
15
100
answer_cut
stringlengths
2
200
conversation
stringlengths
47
29.3k
conversation_cut
stringlengths
47
301
30,901
How do you plot an interaction between a factor and a continous covariate?
If you're talking about an interaction in a general linear model (e.g., ANCOVA), and if your categorical moderator has a reasonably small number of levels, you can plot separate regression lines for each level of the moderator. If you want these on the same plot, superimpose them, code by color or line type, and provide a legend. One of your plot's axes will represent the continuous predictor (presumably the horizontal "$x$" axis), and the other will represent the dependent variable, which I'm assuming is continuous. If your categorical predictor (moderator) has more than four levels, that might get a little too busy for one plot, but I'm not aware of a better method for such circumstances that doesn't resort to separate plots for each level.
How do you plot an interaction between a factor and a continous covariate?
If you're talking about an interaction in a general linear model (e.g., ANCOVA), and if your categorical moderator has a reasonably small number of levels, you can plot separate regression lines for e
How do you plot an interaction between a factor and a continous covariate? If you're talking about an interaction in a general linear model (e.g., ANCOVA), and if your categorical moderator has a reasonably small number of levels, you can plot separate regression lines for each level of the moderator. If you want these on the same plot, superimpose them, code by color or line type, and provide a legend. One of your plot's axes will represent the continuous predictor (presumably the horizontal "$x$" axis), and the other will represent the dependent variable, which I'm assuming is continuous. If your categorical predictor (moderator) has more than four levels, that might get a little too busy for one plot, but I'm not aware of a better method for such circumstances that doesn't resort to separate plots for each level.
How do you plot an interaction between a factor and a continous covariate? If you're talking about an interaction in a general linear model (e.g., ANCOVA), and if your categorical moderator has a reasonably small number of levels, you can plot separate regression lines for e
30,902
How do you plot an interaction between a factor and a continous covariate?
Just to address the following comment: thanks! just to clarify, which graph plot exactly do I need to produce for this? Is it a scatter plot with regression line? If so, then I would need to produce 3 different graphs for the 3 different levels of my moderator...how do I put it on the same graph? Also just to clarify that the predicted values take into consideration the adjusted regression with covariates? Here is how to do it in SPSS. I use the Employee.sav data as example. Suppose we'd like to use salary as outcome, beginning salary as the continuous predictor and job category as the categorical predictor: Go to Graph > Legacy > Scatter: Choose just simple scatter plot is fine. Then, fill in the variables: You'll then see the scatter plot. Double click on the scatter plot to open the chart editor. At the top, click the icon to "fit lines to subgroups." See pic below: Done: Now, whether you use the original salary variable as outcome or the predicted salary as outcome adjusted for the other third or more predictors is a matter of your purpose. The original salary will fit better as exploration, while the predicted salary will be more suitable as presenting your regression results.
How do you plot an interaction between a factor and a continous covariate?
Just to address the following comment: thanks! just to clarify, which graph plot exactly do I need to produce for this? Is it a scatter plot with regression line? If so, then I would need to prod
How do you plot an interaction between a factor and a continous covariate? Just to address the following comment: thanks! just to clarify, which graph plot exactly do I need to produce for this? Is it a scatter plot with regression line? If so, then I would need to produce 3 different graphs for the 3 different levels of my moderator...how do I put it on the same graph? Also just to clarify that the predicted values take into consideration the adjusted regression with covariates? Here is how to do it in SPSS. I use the Employee.sav data as example. Suppose we'd like to use salary as outcome, beginning salary as the continuous predictor and job category as the categorical predictor: Go to Graph > Legacy > Scatter: Choose just simple scatter plot is fine. Then, fill in the variables: You'll then see the scatter plot. Double click on the scatter plot to open the chart editor. At the top, click the icon to "fit lines to subgroups." See pic below: Done: Now, whether you use the original salary variable as outcome or the predicted salary as outcome adjusted for the other third or more predictors is a matter of your purpose. The original salary will fit better as exploration, while the predicted salary will be more suitable as presenting your regression results.
How do you plot an interaction between a factor and a continous covariate? Just to address the following comment: thanks! just to clarify, which graph plot exactly do I need to produce for this? Is it a scatter plot with regression line? If so, then I would need to prod
30,903
Assessing the power of a normality test (in R)
You are simulating under the null hypothesis (normal distribution), therefore the rejection rate will tend to the significance level as expected. To assess the power, you need to simulate under any non-normal distribution. There are infinite possibilities/scenarios (e.g. gamma distributions with increasing skewness, t-distribution with decreasing df etc.) to choose from, depending on the scope of your study.
Assessing the power of a normality test (in R)
You are simulating under the null hypothesis (normal distribution), therefore the rejection rate will tend to the significance level as expected. To assess the power, you need to simulate under any no
Assessing the power of a normality test (in R) You are simulating under the null hypothesis (normal distribution), therefore the rejection rate will tend to the significance level as expected. To assess the power, you need to simulate under any non-normal distribution. There are infinite possibilities/scenarios (e.g. gamma distributions with increasing skewness, t-distribution with decreasing df etc.) to choose from, depending on the scope of your study.
Assessing the power of a normality test (in R) You are simulating under the null hypothesis (normal distribution), therefore the rejection rate will tend to the significance level as expected. To assess the power, you need to simulate under any no
30,904
Assessing the power of a normality test (in R)
Understanding power analysis of statistical hypothesis tests can be enhanced by carrying some out and looking closely at the results. By design, a test of size $\alpha$ is intended to reject the null hypothesis with a chance of at least $\alpha$ when the null is true (its expected false positive rate). When we have the ability (or luxury) of choosing among alternative procedures with this property we would prefer those that (a) actually come close to the nominal false positive rate and (b) have relatively higher chances of rejecting the null hypothesis when it is not true. The second criterion requires us to stipulate in what way(s) and by how much the null fails to be true. In textbook cases this is easy, because the alternatives are limited in scope and clearly specified. With distribution tests like the Shapiro-Wilk, the alternative are much more vague: they are "non-normal." When choosing among distribution tests, then, the analyst is likely to have to conduct their own one-off power study to assess how well the tests work against more specific alternative hypotheses that are of concern in the problem at hand. An example motivated by Michael Mayer's answer posits that the alternative distribution may have qualities similar to those of the family of Student t distributions. This family, parameterized by a number $\nu\ge 1$ (as well as by location and scale) includes in the limit of large $\nu$ the Normal distributions. In either situation--whether evaluating the actual test size or its power--we must generate independent samples from a specified distribution, run the test on each sample, and find the rate at which it rejects the null hypothesis. However, there is more information available in any test result: its P-value. By retaining the set of P-values produced during such a simulation, we can later assess the rate at which the test would reject the null for any value of $\alpha$ we might care about. The heart of the power analysis, then, is a subroutine that generates this P-value distribution (either by simulation, as just described, or--occasionally--with a theoretical formula). Here is an example coded in R. Its arguments include rdist, the name of a function to produce a random sample from some distribution n, the size of samples to request of rdist n.iter, the number of such samples to obtain ..., any optional parameters to be passed on to rdist (such as the degrees of freedom $\nu$). The remaining parameters control the display of the results; they are included mainly as a convenience for generating the figures in this answer. sim <- function(rdist, n, n.iter, prefix="", breaks=seq(0, 1, length.out=20), alpha=0.05, plot=TRUE, ...) { # The simulated P-values. # NB: The optional arguments "..." are passed to `rdist` to specify # its parameters (if any). x <- apply(matrix(rdist(n*n.iter, ...), ncol=n.iter), 2, function(y) shapiro.test(y)$p.value) # The histogram of P-values, if requested. if (plot) { power <- mean(x <= alpha) round.n <- 1+ceiling(log(1 + n.iter * power * (1-power), base=10) / 2) hist(x[x <= max(breaks)], xlab=paste("P value (n=", n, ")", sep=""), breaks=breaks, main=paste(prefix, "(power=", format(power, digits=round.n), ")", sep="")) # Specially color the "significant" part of the histogram hist(x[x <= alpha], breaks=breaks, col="#e0404080", add=TRUE) } # Return the array of P-values for any further processing. return(x) } You can see the computation actually takes just one line; the rest of the code plots the histogram. To illustrate, let's use it to compute the expected false positive rates. "Rates" is in the plural because the properties of a test usually vary with the sample size. Since it is well-known that distributional tests have high power against qualitatively small alternatives when sample sizes are large, this study focuses on a range of small sample sizes where such tests of often applied in practice: typically about $5$ to $100.$ To save computation time, I report only on values of $n$ from $5$ to $20.$ n.iter <- 10^5 # Number of samples to generate n.spec <- c(5, 10, 20) # Sample sizes to study par(mfrow=c(1,length(n.spec))) # Organize subsequent plots into a tableau system.time( invisible(sapply(n.spec, function(n) sim(rnorm, n, n.iter, prefix="DF = Inf "))) ) After specifying the parameters, this code also is just one line. It yields the following output: This is the expected appearance: the histograms show nearly uniform distributions of P-values across the full range from $0$ to $1$. With the nominal size set at $\alpha=0.05,$ the simulations report between $.0481$ and $0.0499$ of the P-values were actually less than that threshold: these are the results highlighted in red. The closeness of these frequencies to the nominal value attests that the Shapiro-Wilk test does perform as advertised. (There does seem to be a tendency towards an unusually high frequency of P-values near $1$. This is of little concern, because in almost all applications the only P-values one looks at are $0.2$ or less.) Let's turn now to assessing the power. The full range of values of $\nu$ for the Student t distribution can adequately be studied by assessing a few instances from around $\nu=100$ down to $\nu=1$. How do I know that? I performed some preliminary runs using very small numbers of iterations (from $100$ to $1000$), which takes no time at all. The code now requires a double loop (and in more complex situations we often need triple or quadruple loops to accommodate all the aspects we need to vary): one to study how the power varies with the sample size and another to study how it varies with the degrees of freedom. Once again, though, everything is done in just one line of code (the third and final): df.spec <- c(64, 16, 4, 2, 1) par(mfrow=c(length(n.spec), length(df.spec))) for (n in n.spec) for (df in df.spec) tmp <- sim(rt, n, n.iter, prefix=paste("DF =", df, ""), df=df) A little study of this tableau provides good intuition about power. I would like to draw attention to its most salient and useful aspects: As the degrees of freedom reduce from $\nu=64$ on the left to $\nu=1$ on the right, more and more of the P-values are small, showing that the power to discriminate these distributions from a Normal distribution increases. (The power is quantified in each plot title: it equals the proportion of the histogram's area that is red.) As the sample size increase from $n=5$ on the top row to $n=20$ on the bottom, the power also increases. Notice how as the alternative distribution differs more from the null distribution and the sample size increases, the P-values start collecting to the left, but there is still a "tail" of them stretching all the way to $1$. This is characteristic of power studies. It shows that testing is a gamble: even when the null hypothesis is flagrantly violated and even when our sample size is reasonably large, our formal test may fail to produce a significant result. Even in the extreme case at the bottom right, where a sample of $20$ is drawn from a Student t distribution with $1$ degree of freedom (a Cauchy distribution), the power is not $1$: there is a $100 - 86.57 = 13\%$ chance that a sample of $20$ iid Cauchy variates will not be considered significantly different from Normal at a level of $5\%$ (that is, with $95\%$ confidence). We could assess the power at any value of $\alpha$ we choose by coloring more or fewer of the bars on these histograms. For instance, to evaluate the power at $\alpha=0.10$, color in the left two bars on each histogram and estimate its area as a fraction of the total. (This won't work too well for values of $\alpha$ smaller than $0.05$ with this figure. In practice, one would limit the histograms to P-values only in the range that would be used, perhaps from $0$ to $20\%$, and show them in enough detail to enable visual assessment of power down to $\alpha=0.01$ or even $\alpha=0.005$. (That is what the breaks option to sim is for.) Post-processing of the simulation results can provide even more detail.) It is amusing that so much can be gleaned from what, in effect, amounts to three lines of code: one to simulate i.i.d. samples from a specified distribution, one to apply that to an array of null distributions, and the third to apply it to an array of alternative distributions. These are the three steps that go into any power analysis: the rest is just summarizing and interpreting the results.
Assessing the power of a normality test (in R)
Understanding power analysis of statistical hypothesis tests can be enhanced by carrying some out and looking closely at the results. By design, a test of size $\alpha$ is intended to reject the null
Assessing the power of a normality test (in R) Understanding power analysis of statistical hypothesis tests can be enhanced by carrying some out and looking closely at the results. By design, a test of size $\alpha$ is intended to reject the null hypothesis with a chance of at least $\alpha$ when the null is true (its expected false positive rate). When we have the ability (or luxury) of choosing among alternative procedures with this property we would prefer those that (a) actually come close to the nominal false positive rate and (b) have relatively higher chances of rejecting the null hypothesis when it is not true. The second criterion requires us to stipulate in what way(s) and by how much the null fails to be true. In textbook cases this is easy, because the alternatives are limited in scope and clearly specified. With distribution tests like the Shapiro-Wilk, the alternative are much more vague: they are "non-normal." When choosing among distribution tests, then, the analyst is likely to have to conduct their own one-off power study to assess how well the tests work against more specific alternative hypotheses that are of concern in the problem at hand. An example motivated by Michael Mayer's answer posits that the alternative distribution may have qualities similar to those of the family of Student t distributions. This family, parameterized by a number $\nu\ge 1$ (as well as by location and scale) includes in the limit of large $\nu$ the Normal distributions. In either situation--whether evaluating the actual test size or its power--we must generate independent samples from a specified distribution, run the test on each sample, and find the rate at which it rejects the null hypothesis. However, there is more information available in any test result: its P-value. By retaining the set of P-values produced during such a simulation, we can later assess the rate at which the test would reject the null for any value of $\alpha$ we might care about. The heart of the power analysis, then, is a subroutine that generates this P-value distribution (either by simulation, as just described, or--occasionally--with a theoretical formula). Here is an example coded in R. Its arguments include rdist, the name of a function to produce a random sample from some distribution n, the size of samples to request of rdist n.iter, the number of such samples to obtain ..., any optional parameters to be passed on to rdist (such as the degrees of freedom $\nu$). The remaining parameters control the display of the results; they are included mainly as a convenience for generating the figures in this answer. sim <- function(rdist, n, n.iter, prefix="", breaks=seq(0, 1, length.out=20), alpha=0.05, plot=TRUE, ...) { # The simulated P-values. # NB: The optional arguments "..." are passed to `rdist` to specify # its parameters (if any). x <- apply(matrix(rdist(n*n.iter, ...), ncol=n.iter), 2, function(y) shapiro.test(y)$p.value) # The histogram of P-values, if requested. if (plot) { power <- mean(x <= alpha) round.n <- 1+ceiling(log(1 + n.iter * power * (1-power), base=10) / 2) hist(x[x <= max(breaks)], xlab=paste("P value (n=", n, ")", sep=""), breaks=breaks, main=paste(prefix, "(power=", format(power, digits=round.n), ")", sep="")) # Specially color the "significant" part of the histogram hist(x[x <= alpha], breaks=breaks, col="#e0404080", add=TRUE) } # Return the array of P-values for any further processing. return(x) } You can see the computation actually takes just one line; the rest of the code plots the histogram. To illustrate, let's use it to compute the expected false positive rates. "Rates" is in the plural because the properties of a test usually vary with the sample size. Since it is well-known that distributional tests have high power against qualitatively small alternatives when sample sizes are large, this study focuses on a range of small sample sizes where such tests of often applied in practice: typically about $5$ to $100.$ To save computation time, I report only on values of $n$ from $5$ to $20.$ n.iter <- 10^5 # Number of samples to generate n.spec <- c(5, 10, 20) # Sample sizes to study par(mfrow=c(1,length(n.spec))) # Organize subsequent plots into a tableau system.time( invisible(sapply(n.spec, function(n) sim(rnorm, n, n.iter, prefix="DF = Inf "))) ) After specifying the parameters, this code also is just one line. It yields the following output: This is the expected appearance: the histograms show nearly uniform distributions of P-values across the full range from $0$ to $1$. With the nominal size set at $\alpha=0.05,$ the simulations report between $.0481$ and $0.0499$ of the P-values were actually less than that threshold: these are the results highlighted in red. The closeness of these frequencies to the nominal value attests that the Shapiro-Wilk test does perform as advertised. (There does seem to be a tendency towards an unusually high frequency of P-values near $1$. This is of little concern, because in almost all applications the only P-values one looks at are $0.2$ or less.) Let's turn now to assessing the power. The full range of values of $\nu$ for the Student t distribution can adequately be studied by assessing a few instances from around $\nu=100$ down to $\nu=1$. How do I know that? I performed some preliminary runs using very small numbers of iterations (from $100$ to $1000$), which takes no time at all. The code now requires a double loop (and in more complex situations we often need triple or quadruple loops to accommodate all the aspects we need to vary): one to study how the power varies with the sample size and another to study how it varies with the degrees of freedom. Once again, though, everything is done in just one line of code (the third and final): df.spec <- c(64, 16, 4, 2, 1) par(mfrow=c(length(n.spec), length(df.spec))) for (n in n.spec) for (df in df.spec) tmp <- sim(rt, n, n.iter, prefix=paste("DF =", df, ""), df=df) A little study of this tableau provides good intuition about power. I would like to draw attention to its most salient and useful aspects: As the degrees of freedom reduce from $\nu=64$ on the left to $\nu=1$ on the right, more and more of the P-values are small, showing that the power to discriminate these distributions from a Normal distribution increases. (The power is quantified in each plot title: it equals the proportion of the histogram's area that is red.) As the sample size increase from $n=5$ on the top row to $n=20$ on the bottom, the power also increases. Notice how as the alternative distribution differs more from the null distribution and the sample size increases, the P-values start collecting to the left, but there is still a "tail" of them stretching all the way to $1$. This is characteristic of power studies. It shows that testing is a gamble: even when the null hypothesis is flagrantly violated and even when our sample size is reasonably large, our formal test may fail to produce a significant result. Even in the extreme case at the bottom right, where a sample of $20$ is drawn from a Student t distribution with $1$ degree of freedom (a Cauchy distribution), the power is not $1$: there is a $100 - 86.57 = 13\%$ chance that a sample of $20$ iid Cauchy variates will not be considered significantly different from Normal at a level of $5\%$ (that is, with $95\%$ confidence). We could assess the power at any value of $\alpha$ we choose by coloring more or fewer of the bars on these histograms. For instance, to evaluate the power at $\alpha=0.10$, color in the left two bars on each histogram and estimate its area as a fraction of the total. (This won't work too well for values of $\alpha$ smaller than $0.05$ with this figure. In practice, one would limit the histograms to P-values only in the range that would be used, perhaps from $0$ to $20\%$, and show them in enough detail to enable visual assessment of power down to $\alpha=0.01$ or even $\alpha=0.005$. (That is what the breaks option to sim is for.) Post-processing of the simulation results can provide even more detail.) It is amusing that so much can be gleaned from what, in effect, amounts to three lines of code: one to simulate i.i.d. samples from a specified distribution, one to apply that to an array of null distributions, and the third to apply it to an array of alternative distributions. These are the three steps that go into any power analysis: the rest is just summarizing and interpreting the results.
Assessing the power of a normality test (in R) Understanding power analysis of statistical hypothesis tests can be enhanced by carrying some out and looking closely at the results. By design, a test of size $\alpha$ is intended to reject the null
30,905
Assessing the power of a normality test (in R)
(More than a comment, perhaps not a complete answer) [I] would expect that as the sample size increases the probability of rejecting the null decreases Leaving aside considerations of biased tests (which are not uncommon in goodness of fit, so it's worth a mention), there are three situations relating to rejection rate one might want to consider: 1) the rejection rate when simulating from the null (as you seem to be doing in your question) Here, the rejection rate should be at or near the significance level, so, no, if you hold the significance level constant, the rejection rate doesn't decrease as n increases, but stays at/near $\alpha$. 2) the rejection rate when simulating from some alternative Here the rejection rate should increase as n increases. 3) the rejection rate for some collection of real data Practically, the null is never actually true, and real data will have some mixture of amounts of non-normality (as measured by the test statistic). If the degree of non-normality is not related to sample size, the rejection rate should increase as n increases. So in fact, in none of these situations should we see the rejection rate decrease with sample size.
Assessing the power of a normality test (in R)
(More than a comment, perhaps not a complete answer) [I] would expect that as the sample size increases the probability of rejecting the null decreases Leaving aside considerations of biased tests (
Assessing the power of a normality test (in R) (More than a comment, perhaps not a complete answer) [I] would expect that as the sample size increases the probability of rejecting the null decreases Leaving aside considerations of biased tests (which are not uncommon in goodness of fit, so it's worth a mention), there are three situations relating to rejection rate one might want to consider: 1) the rejection rate when simulating from the null (as you seem to be doing in your question) Here, the rejection rate should be at or near the significance level, so, no, if you hold the significance level constant, the rejection rate doesn't decrease as n increases, but stays at/near $\alpha$. 2) the rejection rate when simulating from some alternative Here the rejection rate should increase as n increases. 3) the rejection rate for some collection of real data Practically, the null is never actually true, and real data will have some mixture of amounts of non-normality (as measured by the test statistic). If the degree of non-normality is not related to sample size, the rejection rate should increase as n increases. So in fact, in none of these situations should we see the rejection rate decrease with sample size.
Assessing the power of a normality test (in R) (More than a comment, perhaps not a complete answer) [I] would expect that as the sample size increases the probability of rejecting the null decreases Leaving aside considerations of biased tests (
30,906
Naive Bayes classifier gives a probability greater than 1
You will not break the algorithm by having a word which shows up in $100\%$ of messages. The forumlas you are using for the probability are wrong. For the two-word case, here is an example to show why. Suppose your words are $a$, $b$, and $x$ and that you have two messages to use to build the classifier. The first message is spam and reads a b. The second message is not spam and reads x. Then $$P(\text{spam} | \text{word a, word b}) = 1$$ as the only message which contains $a$ and $b$ is spam. But $P(\text{spam})=1/2$ as one out of two messages is spam. Also, $P(\text{word a}|\text{spam}) = 1$ and $P(\text{word b}|\text{spam}) = 1$. Also, $P(\text{word a})= 1/2$ because one out of two messages contains $a$, and similarly $P(\text{word b})=1/2$. So the right hand side of your formula is $$P(\text{spam})\frac{P(\text{word a}|\text{spam})P(\text{word b}|\text{spam})} {P(\text{word a})P(\text{word b})} = 2,$$ which is not a probability. The correct formula is $$P(\text{spam} | \text{word a, word b}) = \frac{P(\text{spam, word a, word b})}{P(\text{word a, word b})} = \frac{P(\text{word a, word b}|\text{spam})P(\text{spam})}{P(\text{word a, word b})}.$$ The naive Bayes assumption is that the words $a$ and $b$ appear independently, given that the message is spam (and also, given that the message is not spam). This happens to be true in this example, and the idea behind the naive Bayes classifier is to assume that it's true, in which case the formula becomes $$\frac{P(\text{word a}|\text{spam})P(\text{word b}|\text{spam})P(\text{spam})}{P(\text{word a, word b})}.$$ Your mistake was to assume that the denominator also becomes $P(\text{word a})P(\text{word b})$ but this is not true because $a$ and $b$ are not independent; they are only independent given that you know whether the message is spam or not. You can see that they are not independent by asking: "Suppose I know that a message contains $a$. Does that tell me anything new about whether it contains $b$?" The answer is yes, certainly, because the only message which contains $a$ also contains $b$. (End of example.) The confusion arises because people usually don't bother to write the denominator in the naive Bayes formula, as it doesn't affect the calculations except for a scaling factor which is the same for spam as for not spam. You will often see the formula written $$P(\text{spam} | \text{word a, word b}) \propto P(\text{spam}) P(\text{word a}|\text{spam} ) P(\text{word b}|\text{spam})$$ where the constant of proportionality happens to be $\frac{1}{P(\text{word a, word b})}$. But you can ingnore this constant when classifying a new message. You would simply calculate the right hand sides $$P(\text{spam}) P(\text{word a}|\text{spam} ) P(\text{word b}|\text{spam})$$ and $$P(\text{not spam}) P(\text{word a}|\text{not spam} ) P(\text{word b}|\text{not spam})$$ and then classify as spam or not spam depending on which of these is bigger.
Naive Bayes classifier gives a probability greater than 1
You will not break the algorithm by having a word which shows up in $100\%$ of messages. The forumlas you are using for the probability are wrong. For the two-word case, here is an example to show why
Naive Bayes classifier gives a probability greater than 1 You will not break the algorithm by having a word which shows up in $100\%$ of messages. The forumlas you are using for the probability are wrong. For the two-word case, here is an example to show why. Suppose your words are $a$, $b$, and $x$ and that you have two messages to use to build the classifier. The first message is spam and reads a b. The second message is not spam and reads x. Then $$P(\text{spam} | \text{word a, word b}) = 1$$ as the only message which contains $a$ and $b$ is spam. But $P(\text{spam})=1/2$ as one out of two messages is spam. Also, $P(\text{word a}|\text{spam}) = 1$ and $P(\text{word b}|\text{spam}) = 1$. Also, $P(\text{word a})= 1/2$ because one out of two messages contains $a$, and similarly $P(\text{word b})=1/2$. So the right hand side of your formula is $$P(\text{spam})\frac{P(\text{word a}|\text{spam})P(\text{word b}|\text{spam})} {P(\text{word a})P(\text{word b})} = 2,$$ which is not a probability. The correct formula is $$P(\text{spam} | \text{word a, word b}) = \frac{P(\text{spam, word a, word b})}{P(\text{word a, word b})} = \frac{P(\text{word a, word b}|\text{spam})P(\text{spam})}{P(\text{word a, word b})}.$$ The naive Bayes assumption is that the words $a$ and $b$ appear independently, given that the message is spam (and also, given that the message is not spam). This happens to be true in this example, and the idea behind the naive Bayes classifier is to assume that it's true, in which case the formula becomes $$\frac{P(\text{word a}|\text{spam})P(\text{word b}|\text{spam})P(\text{spam})}{P(\text{word a, word b})}.$$ Your mistake was to assume that the denominator also becomes $P(\text{word a})P(\text{word b})$ but this is not true because $a$ and $b$ are not independent; they are only independent given that you know whether the message is spam or not. You can see that they are not independent by asking: "Suppose I know that a message contains $a$. Does that tell me anything new about whether it contains $b$?" The answer is yes, certainly, because the only message which contains $a$ also contains $b$. (End of example.) The confusion arises because people usually don't bother to write the denominator in the naive Bayes formula, as it doesn't affect the calculations except for a scaling factor which is the same for spam as for not spam. You will often see the formula written $$P(\text{spam} | \text{word a, word b}) \propto P(\text{spam}) P(\text{word a}|\text{spam} ) P(\text{word b}|\text{spam})$$ where the constant of proportionality happens to be $\frac{1}{P(\text{word a, word b})}$. But you can ingnore this constant when classifying a new message. You would simply calculate the right hand sides $$P(\text{spam}) P(\text{word a}|\text{spam} ) P(\text{word b}|\text{spam})$$ and $$P(\text{not spam}) P(\text{word a}|\text{not spam} ) P(\text{word b}|\text{not spam})$$ and then classify as spam or not spam depending on which of these is bigger.
Naive Bayes classifier gives a probability greater than 1 You will not break the algorithm by having a word which shows up in $100\%$ of messages. The forumlas you are using for the probability are wrong. For the two-word case, here is an example to show why
30,907
Why is ARMA used to model a stationary process?
It's mainly by definition. You use ARMA if the series is stationary. If it is not stationary, you can convert the series into a stationary process by taking the nth difference, in this case the ARMA model becomes an ARIMA. Hope this helps.
Why is ARMA used to model a stationary process?
It's mainly by definition. You use ARMA if the series is stationary. If it is not stationary, you can convert the series into a stationary process by taking the nth difference, in this case the ARMA m
Why is ARMA used to model a stationary process? It's mainly by definition. You use ARMA if the series is stationary. If it is not stationary, you can convert the series into a stationary process by taking the nth difference, in this case the ARMA model becomes an ARIMA. Hope this helps.
Why is ARMA used to model a stationary process? It's mainly by definition. You use ARMA if the series is stationary. If it is not stationary, you can convert the series into a stationary process by taking the nth difference, in this case the ARMA m
30,908
Why is ARMA used to model a stationary process?
There is an important reason why the ARIMA might be preferred when the series are stationary. And this reason is the Wold's decomposition theorem - any covariance stationary process has a linear representation: a linear deterministic component ($V_t$) and a linear indeterministic components ($\varepsilon_t$) Suppose that ${X_t}$ is a covariance stationary process with $\mathbb{E}[X_t] = 0$ and covariance function, $\gamma(j) = \mathbb{E}[X_t X_{t−j}]$ , $ \forall j$. Then $$X_t = \sum_{j=0}^{\infty} \psi_j \varepsilon_{t−j} + V_t$$ where $\psi_0=1$, $\sum_{j=0}^{\infty} \psi_j^2<\infty$ $\varepsilon_{t−j} \sim WN(0, \sigma_{\varepsilon}^2)$ $\mathbb{E}[\varepsilon_t V_s] = 0, \forall s,t>0$ $\varepsilon_t = X_t - \mathbb{E}[X_t|X_{t-1},X_{t-2},...]$ As you may see, the first part of the representation looks like an $MA(\infty)$ process with square summable moving average terms. The second part is the deterministic part of $X_t$ because $V_t$ is perfectly predictable based on past observations on $X_t$. And we know that models of $MA(\infty)$ representations are in their most general form $ARMA(p,q)$ representations: as long as the roots of the autoregressive part of an ARMA process are less than unity in absolute value, the process has a $MA(\infty)$ representation. However, note, while an ARMA process generates an $MA(\infty)$ with square summable weights, it is not the only form that does this. A process that is square summable is not necessarily absolutely summable. $ARMA(p,q)$ models have ‘short memory’ relative to the entire class representations envisioned by the Wold representation. But Wold representation - despite covering more general cases- provides us with a strong argument of why modelling with ARMA is justifiable on stationary, short memory series. Note also, another example of a stationary process is the periodic processes.If $Z_1,Z_2$ independent $N(0,\sigma^2)$ and $\omega$ constant then the process $$X_t = Z_1 \cos (t \omega) + Z_2 \sin (t \omega)$$ is second order stationary with mean zero and autocovariance $cov(X_t,X_{t+h}) = \sigma^2 cos (\omega h) $.
Why is ARMA used to model a stationary process?
There is an important reason why the ARIMA might be preferred when the series are stationary. And this reason is the Wold's decomposition theorem - any covariance stationary process has a linear repr
Why is ARMA used to model a stationary process? There is an important reason why the ARIMA might be preferred when the series are stationary. And this reason is the Wold's decomposition theorem - any covariance stationary process has a linear representation: a linear deterministic component ($V_t$) and a linear indeterministic components ($\varepsilon_t$) Suppose that ${X_t}$ is a covariance stationary process with $\mathbb{E}[X_t] = 0$ and covariance function, $\gamma(j) = \mathbb{E}[X_t X_{t−j}]$ , $ \forall j$. Then $$X_t = \sum_{j=0}^{\infty} \psi_j \varepsilon_{t−j} + V_t$$ where $\psi_0=1$, $\sum_{j=0}^{\infty} \psi_j^2<\infty$ $\varepsilon_{t−j} \sim WN(0, \sigma_{\varepsilon}^2)$ $\mathbb{E}[\varepsilon_t V_s] = 0, \forall s,t>0$ $\varepsilon_t = X_t - \mathbb{E}[X_t|X_{t-1},X_{t-2},...]$ As you may see, the first part of the representation looks like an $MA(\infty)$ process with square summable moving average terms. The second part is the deterministic part of $X_t$ because $V_t$ is perfectly predictable based on past observations on $X_t$. And we know that models of $MA(\infty)$ representations are in their most general form $ARMA(p,q)$ representations: as long as the roots of the autoregressive part of an ARMA process are less than unity in absolute value, the process has a $MA(\infty)$ representation. However, note, while an ARMA process generates an $MA(\infty)$ with square summable weights, it is not the only form that does this. A process that is square summable is not necessarily absolutely summable. $ARMA(p,q)$ models have ‘short memory’ relative to the entire class representations envisioned by the Wold representation. But Wold representation - despite covering more general cases- provides us with a strong argument of why modelling with ARMA is justifiable on stationary, short memory series. Note also, another example of a stationary process is the periodic processes.If $Z_1,Z_2$ independent $N(0,\sigma^2)$ and $\omega$ constant then the process $$X_t = Z_1 \cos (t \omega) + Z_2 \sin (t \omega)$$ is second order stationary with mean zero and autocovariance $cov(X_t,X_{t+h}) = \sigma^2 cos (\omega h) $.
Why is ARMA used to model a stationary process? There is an important reason why the ARIMA might be preferred when the series are stationary. And this reason is the Wold's decomposition theorem - any covariance stationary process has a linear repr
30,909
Why is ARMA used to model a stationary process?
A series that contains a Level Shift can be made stationary by de-meaning. A series that has a level shift will appear to have significant acf structure. The remedy is NOT to build an ARIMA but to simply detect the point in time where the level shift occurs and the impact of the level shift. In practice there can be multiple level shifts and/or multiple time trends ... all possible obfuscated by Pulses and Seasonal Pulses.
Why is ARMA used to model a stationary process?
A series that contains a Level Shift can be made stationary by de-meaning. A series that has a level shift will appear to have significant acf structure. The remedy is NOT to build an ARIMA but to sim
Why is ARMA used to model a stationary process? A series that contains a Level Shift can be made stationary by de-meaning. A series that has a level shift will appear to have significant acf structure. The remedy is NOT to build an ARIMA but to simply detect the point in time where the level shift occurs and the impact of the level shift. In practice there can be multiple level shifts and/or multiple time trends ... all possible obfuscated by Pulses and Seasonal Pulses.
Why is ARMA used to model a stationary process? A series that contains a Level Shift can be made stationary by de-meaning. A series that has a level shift will appear to have significant acf structure. The remedy is NOT to build an ARIMA but to sim
30,910
Using self organizing maps for dimensionality reduction
The self organising map (SOM) is a space-filling grid that provides a discretised dimensionality reduction of the data. You start with a high-dimensional space of data points, and an arbitrary grid that sits in that space. The grid can be of any dimension, but is usually smaller than the dimension of your dataset, and is commonly 2D, because that's easy to visualise. For each datum in your data set, you find the nearest grid point, and "pull" that grid point toward the data set. You also pull each of the neighbouring grid points toward the new position of the first grid point. At the start of the process, you pull lots of the neighbours toward the data point. Later in the process, when your grid is starting to fill the space, you move less neighbours, and this acts as a kind of fine tuning. This process results in a set of points in the data space that fit the shape of the space reasonably well, but can also be treated as a lower-dimension grid. This is process explained well by two images from page 1468 of Kohonen's 1990 paper: This image shows a one dimensional map in a uniform distribution in a triangle. The grid starts as a mess in the centre, and is gradually pulled into a curve that fills the triangle reasonably well, given the number of grid points: The left part of this second image shows a 2D SOM grid closely filling the space defined by the cactus shape on the left: There is a video of the SOM process using a 2D grid in a 2D space, and in a 3D space on youtube. Now every one of the original data points in the space has one closest neighbour, to which it is assigned. The grid are thus the centres of clusters of data points. The grid provides the dimensionality reduction. Here is a comparison of dimensionality reduction using principal component analysis (PCA), from the SOM page on wikipedia: It immediately be seen that the one dimensional SOM provides a much better fit to the data, explaining over 93% of the variance, compared to 77% for PCA. However, as far as I am aware, there is no easy way to explain the remaining variance, as there is with PCA (using extra dimensions), since there is no neat way to unwrap the data around the discrete SOM grid.
Using self organizing maps for dimensionality reduction
The self organising map (SOM) is a space-filling grid that provides a discretised dimensionality reduction of the data. You start with a high-dimensional space of data points, and an arbitrary grid th
Using self organizing maps for dimensionality reduction The self organising map (SOM) is a space-filling grid that provides a discretised dimensionality reduction of the data. You start with a high-dimensional space of data points, and an arbitrary grid that sits in that space. The grid can be of any dimension, but is usually smaller than the dimension of your dataset, and is commonly 2D, because that's easy to visualise. For each datum in your data set, you find the nearest grid point, and "pull" that grid point toward the data set. You also pull each of the neighbouring grid points toward the new position of the first grid point. At the start of the process, you pull lots of the neighbours toward the data point. Later in the process, when your grid is starting to fill the space, you move less neighbours, and this acts as a kind of fine tuning. This process results in a set of points in the data space that fit the shape of the space reasonably well, but can also be treated as a lower-dimension grid. This is process explained well by two images from page 1468 of Kohonen's 1990 paper: This image shows a one dimensional map in a uniform distribution in a triangle. The grid starts as a mess in the centre, and is gradually pulled into a curve that fills the triangle reasonably well, given the number of grid points: The left part of this second image shows a 2D SOM grid closely filling the space defined by the cactus shape on the left: There is a video of the SOM process using a 2D grid in a 2D space, and in a 3D space on youtube. Now every one of the original data points in the space has one closest neighbour, to which it is assigned. The grid are thus the centres of clusters of data points. The grid provides the dimensionality reduction. Here is a comparison of dimensionality reduction using principal component analysis (PCA), from the SOM page on wikipedia: It immediately be seen that the one dimensional SOM provides a much better fit to the data, explaining over 93% of the variance, compared to 77% for PCA. However, as far as I am aware, there is no easy way to explain the remaining variance, as there is with PCA (using extra dimensions), since there is no neat way to unwrap the data around the discrete SOM grid.
Using self organizing maps for dimensionality reduction The self organising map (SOM) is a space-filling grid that provides a discretised dimensionality reduction of the data. You start with a high-dimensional space of data points, and an arbitrary grid th
30,911
Using self organizing maps for dimensionality reduction
Despite the fact that you end up with more nodes than feature dimensions, you're still reducing dimensionality. Bear in mind that initially you had a 25-dimensional space and, now, you have those 25 dimensions projected in just 2 dimensions. Instead of representing the full continuous 25-dimensional space, the SOM provides you the 'most important' points in that space.
Using self organizing maps for dimensionality reduction
Despite the fact that you end up with more nodes than feature dimensions, you're still reducing dimensionality. Bear in mind that initially you had a 25-dimensional space and, now, you have those 25 d
Using self organizing maps for dimensionality reduction Despite the fact that you end up with more nodes than feature dimensions, you're still reducing dimensionality. Bear in mind that initially you had a 25-dimensional space and, now, you have those 25 dimensions projected in just 2 dimensions. Instead of representing the full continuous 25-dimensional space, the SOM provides you the 'most important' points in that space.
Using self organizing maps for dimensionality reduction Despite the fact that you end up with more nodes than feature dimensions, you're still reducing dimensionality. Bear in mind that initially you had a 25-dimensional space and, now, you have those 25 d
30,912
Good classifiers for small training sets
First of all, you may want to have a look at the Elements of Statistical Learning. They discuss variable selection as well as different regularization techniques in chapter 3 (never mind it being about regression). If you think your variables are basically not correlated, and should go either into the model or not, then you may want to have a look at random forests. They try to cope with the small sample size problem by building a large number of models from slightly varying subsets of the data (subsetting both cases and variates). In addition, they can tell you how many decision trees use which variate, which could help your variable selection. However, if you think your variates may be correlated, methods like PCA-LDA or PLS-LDA may be more appropriate. If you chain them correctly, you can even derive coefficients that tell you how much of the original variates goes into what LD function. (You can ask me for R code, if that helps). I'd go for LDA instead of logistic regression here, as LR tends to need more training cases.
Good classifiers for small training sets
First of all, you may want to have a look at the Elements of Statistical Learning. They discuss variable selection as well as different regularization techniques in chapter 3 (never mind it being abou
Good classifiers for small training sets First of all, you may want to have a look at the Elements of Statistical Learning. They discuss variable selection as well as different regularization techniques in chapter 3 (never mind it being about regression). If you think your variables are basically not correlated, and should go either into the model or not, then you may want to have a look at random forests. They try to cope with the small sample size problem by building a large number of models from slightly varying subsets of the data (subsetting both cases and variates). In addition, they can tell you how many decision trees use which variate, which could help your variable selection. However, if you think your variates may be correlated, methods like PCA-LDA or PLS-LDA may be more appropriate. If you chain them correctly, you can even derive coefficients that tell you how much of the original variates goes into what LD function. (You can ask me for R code, if that helps). I'd go for LDA instead of logistic regression here, as LR tends to need more training cases.
Good classifiers for small training sets First of all, you may want to have a look at the Elements of Statistical Learning. They discuss variable selection as well as different regularization techniques in chapter 3 (never mind it being abou
30,913
Good classifiers for small training sets
You want to keep your model as simple as possible so it won't overfit. This usually means making simple assumptions about the distribution the data comes from. Some possibilities are Naive Bayes, Logistic Regression, some type of decision tree, maybe linear SVM (without playing with the external parameters too much). Also, you should try to have a very small number of features. You can try various feature selection methods, but if you want to learn about the importance of the original features, try not to distort the feature space (e.g. no PCA).
Good classifiers for small training sets
You want to keep your model as simple as possible so it won't overfit. This usually means making simple assumptions about the distribution the data comes from. Some possibilities are Naive Bayes, Logi
Good classifiers for small training sets You want to keep your model as simple as possible so it won't overfit. This usually means making simple assumptions about the distribution the data comes from. Some possibilities are Naive Bayes, Logistic Regression, some type of decision tree, maybe linear SVM (without playing with the external parameters too much). Also, you should try to have a very small number of features. You can try various feature selection methods, but if you want to learn about the importance of the original features, try not to distort the feature space (e.g. no PCA).
Good classifiers for small training sets You want to keep your model as simple as possible so it won't overfit. This usually means making simple assumptions about the distribution the data comes from. Some possibilities are Naive Bayes, Logi
30,914
Understanding bootstrap method for confidence interval of correlation coefficients
The short answer is that - at least in the simple cases - the observations are sampled with replacement. Imagine writing each of the data values on an n sided die and rolling the die n times. If you're trying to bootstrap a correlation, you resample the data in pairs $(x_i,y_i)$. If you think of your data as two columns, each row is an observation, and you resample the observations (rows). Here's an example: More generally, think of a matrix of data where the observations (rows) are resampled. (This is not a suitable resampling scheme for every situation, though. There are a plethora of bootstrap schemes.)
Understanding bootstrap method for confidence interval of correlation coefficients
The short answer is that - at least in the simple cases - the observations are sampled with replacement. Imagine writing each of the data values on an n sided die and rolling the die n times. If you'r
Understanding bootstrap method for confidence interval of correlation coefficients The short answer is that - at least in the simple cases - the observations are sampled with replacement. Imagine writing each of the data values on an n sided die and rolling the die n times. If you're trying to bootstrap a correlation, you resample the data in pairs $(x_i,y_i)$. If you think of your data as two columns, each row is an observation, and you resample the observations (rows). Here's an example: More generally, think of a matrix of data where the observations (rows) are resampled. (This is not a suitable resampling scheme for every situation, though. There are a plethora of bootstrap schemes.)
Understanding bootstrap method for confidence interval of correlation coefficients The short answer is that - at least in the simple cases - the observations are sampled with replacement. Imagine writing each of the data values on an n sided die and rolling the die n times. If you'r
30,915
Understanding bootstrap method for confidence interval of correlation coefficients
The bootstrap is one of a plethora of estimation techniques based on the empirical distribution function of the data, $x$: $$\mathbb{F}(t) = \int_{0}^t \frac{\sum_{i=1}^n I(s > x_i)}{n} ds$$ In the multivariate setting, you consider rows of observations perfectly correlated when bootstrapping. This prevents us from sampling post menopausal males in cancer risk studies. With a sample cumulative distribution function but you can draw samples from it based on any random sampling technique, which is a de facto tool in almost any statistical package. Drawing samples from this is equivalent to just assigning $1/n$ probability to every jointly observed row in your data. This means that, in your case, $(x_i, y_i)$ pairs would have to be sampled jointly. Permutation testing on the other hand allows you to randomly rearrange the columns of jointly observed rows of data and perform resampling tests based on those values.
Understanding bootstrap method for confidence interval of correlation coefficients
The bootstrap is one of a plethora of estimation techniques based on the empirical distribution function of the data, $x$: $$\mathbb{F}(t) = \int_{0}^t \frac{\sum_{i=1}^n I(s > x_i)}{n} ds$$ In the m
Understanding bootstrap method for confidence interval of correlation coefficients The bootstrap is one of a plethora of estimation techniques based on the empirical distribution function of the data, $x$: $$\mathbb{F}(t) = \int_{0}^t \frac{\sum_{i=1}^n I(s > x_i)}{n} ds$$ In the multivariate setting, you consider rows of observations perfectly correlated when bootstrapping. This prevents us from sampling post menopausal males in cancer risk studies. With a sample cumulative distribution function but you can draw samples from it based on any random sampling technique, which is a de facto tool in almost any statistical package. Drawing samples from this is equivalent to just assigning $1/n$ probability to every jointly observed row in your data. This means that, in your case, $(x_i, y_i)$ pairs would have to be sampled jointly. Permutation testing on the other hand allows you to randomly rearrange the columns of jointly observed rows of data and perform resampling tests based on those values.
Understanding bootstrap method for confidence interval of correlation coefficients The bootstrap is one of a plethora of estimation techniques based on the empirical distribution function of the data, $x$: $$\mathbb{F}(t) = \int_{0}^t \frac{\sum_{i=1}^n I(s > x_i)}{n} ds$$ In the m
30,916
Generate three random numbers that sum to 1 in R
The mode is a bit of a red herring. Here is a very simple solution to this problem that circumvent the need to define the mode precisely. I'm surprised it has not been proposed earlier. The constraint on the mode can be easily satisfied by drawing samples from a symmetric distribution and scaling them suitably: $$(x_i,y_i,z_i)\sim\mathrm{i.i.d.}\;\mathcal{L}(\mu,\sigma)$$ $$(x_i^*,y_i^*,z_i^*)=\left(\frac{x_i}{x_i+y_i+z_i},\frac{y_i}{x_i+y_i+z_i},\frac{z_i}{x_i+y_i+z_i}\right)$$ where $\mathcal{L}(\mu,\sigma)$ is a symmetric distribution (so that the mean, the mode and the median are the same) and chosen such that the probability mass below 0 is 0. For example, picking $\mathcal{L}(\mu,\sigma)$ to be $\mathrm{Beta}(2,2)$: a1 <- matrix(rbeta(100*3,2,2), nc=3) a1 <- sweep(a1, 1, rowSums(a1), FUN="/") colMeans(a1) # [1] 0.3342165 0.3341534 0.3316301 yielding the desired solution sum(colMeans(a1)) # [1] 1
Generate three random numbers that sum to 1 in R
The mode is a bit of a red herring. Here is a very simple solution to this problem that circumvent the need to define the mode precisely. I'm surprised it has not been proposed earlier. The constraint
Generate three random numbers that sum to 1 in R The mode is a bit of a red herring. Here is a very simple solution to this problem that circumvent the need to define the mode precisely. I'm surprised it has not been proposed earlier. The constraint on the mode can be easily satisfied by drawing samples from a symmetric distribution and scaling them suitably: $$(x_i,y_i,z_i)\sim\mathrm{i.i.d.}\;\mathcal{L}(\mu,\sigma)$$ $$(x_i^*,y_i^*,z_i^*)=\left(\frac{x_i}{x_i+y_i+z_i},\frac{y_i}{x_i+y_i+z_i},\frac{z_i}{x_i+y_i+z_i}\right)$$ where $\mathcal{L}(\mu,\sigma)$ is a symmetric distribution (so that the mean, the mode and the median are the same) and chosen such that the probability mass below 0 is 0. For example, picking $\mathcal{L}(\mu,\sigma)$ to be $\mathrm{Beta}(2,2)$: a1 <- matrix(rbeta(100*3,2,2), nc=3) a1 <- sweep(a1, 1, rowSums(a1), FUN="/") colMeans(a1) # [1] 0.3342165 0.3341534 0.3316301 yielding the desired solution sum(colMeans(a1)) # [1] 1
Generate three random numbers that sum to 1 in R The mode is a bit of a red herring. Here is a very simple solution to this problem that circumvent the need to define the mode precisely. I'm surprised it has not been proposed earlier. The constraint
30,917
Generate three random numbers that sum to 1 in R
If X1, X2, and X3 are i.i.d. Gamma(a) then {X1,X2,X3}/(X1+X2+X3) will be Dirichlet(a,a,a). If a>1 then the mode will be 1/3. The peak will be sharper for larger values of a.
Generate three random numbers that sum to 1 in R
If X1, X2, and X3 are i.i.d. Gamma(a) then {X1,X2,X3}/(X1+X2+X3) will be Dirichlet(a,a,a). If a>1 then the mode will be 1/3. The peak will be sharper for larger values of a.
Generate three random numbers that sum to 1 in R If X1, X2, and X3 are i.i.d. Gamma(a) then {X1,X2,X3}/(X1+X2+X3) will be Dirichlet(a,a,a). If a>1 then the mode will be 1/3. The peak will be sharper for larger values of a.
Generate three random numbers that sum to 1 in R If X1, X2, and X3 are i.i.d. Gamma(a) then {X1,X2,X3}/(X1+X2+X3) will be Dirichlet(a,a,a). If a>1 then the mode will be 1/3. The peak will be sharper for larger values of a.
30,918
Generate three random numbers that sum to 1 in R
Here is an approximate numerical answer. It can easily be made more precise. Let $\{U,V,W\} = {X,Y,Z}/(X+Y+Z)$, where $X,Y,Z$ are i.i.d. with a trapezoidal density on $[0,1]$: $f(x)=1+a-2ax.$ $U,V,W$ will have identical marginals. Given a numeric 'a', I used Mathematica to get the cdf of $U$: F[u_] = Assuming[0 < u < 1, Simplify@Integrate[ Boole[x < u(x+y+z)] f[x] f[y] f[z], {x,0,1},{y,0,1},{z,0,1}] Differentiating $F$ twice, setting the result to zero, and solving the resulting 7th degree polynomial gave the mode. I used a binary search to refine the value of 'a'. I used exact arithmetic throughout, up to the point of solving the polynomial. a mode 1 .318182 7/8 .322065 13/16 .327099 25/32 .330465 49/64 .332373 97/128 .333376 <-- close enough? 3/4 .334221 1/2 .353738 0 .359187
Generate three random numbers that sum to 1 in R
Here is an approximate numerical answer. It can easily be made more precise. Let $\{U,V,W\} = {X,Y,Z}/(X+Y+Z)$, where $X,Y,Z$ are i.i.d. with a trapezoidal density on $[0,1]$: $f(x)=1+a-2ax.$ $U,V,W$
Generate three random numbers that sum to 1 in R Here is an approximate numerical answer. It can easily be made more precise. Let $\{U,V,W\} = {X,Y,Z}/(X+Y+Z)$, where $X,Y,Z$ are i.i.d. with a trapezoidal density on $[0,1]$: $f(x)=1+a-2ax.$ $U,V,W$ will have identical marginals. Given a numeric 'a', I used Mathematica to get the cdf of $U$: F[u_] = Assuming[0 < u < 1, Simplify@Integrate[ Boole[x < u(x+y+z)] f[x] f[y] f[z], {x,0,1},{y,0,1},{z,0,1}] Differentiating $F$ twice, setting the result to zero, and solving the resulting 7th degree polynomial gave the mode. I used a binary search to refine the value of 'a'. I used exact arithmetic throughout, up to the point of solving the polynomial. a mode 1 .318182 7/8 .322065 13/16 .327099 25/32 .330465 49/64 .332373 97/128 .333376 <-- close enough? 3/4 .334221 1/2 .353738 0 .359187
Generate three random numbers that sum to 1 in R Here is an approximate numerical answer. It can easily be made more precise. Let $\{U,V,W\} = {X,Y,Z}/(X+Y+Z)$, where $X,Y,Z$ are i.i.d. with a trapezoidal density on $[0,1]$: $f(x)=1+a-2ax.$ $U,V,W$
30,919
Generate three random numbers that sum to 1 in R
Analytically: Given a joint pdf for $X$,$Y$, and $Z$ $f_{X,Y,Z}(x,y,z)$, if they are iid, then $f_{X,Y,Z}(x,y,z)=f_X(x)f_Y(y)f_Z(z)$, where $f_X(x)=f_Y(y)=f_Z(z)$. You have to find the pdf $$f_{X,Y,Z}\left(\dfrac{x}{x+y+z}\right)$$ After differentiating and equal to zero, you'll find your mode. Obviosly, mode will depend on $f_{X,Y,Z}(x,y,z)$ and consequently on $f_X(x)$, $f_Y(y)$ and $f_Z(z)$. Numerically: sample three random numbers from your preferred distribution, standardize them and save them in the $i^{th}$-row an Nx3 matrix. Repeat this procedure N times and plot the frequencies of each column. The analytical solution is preferred instead of trying to demonstrate it from random samples in R, which would be just a numerical approximation.
Generate three random numbers that sum to 1 in R
Analytically: Given a joint pdf for $X$,$Y$, and $Z$ $f_{X,Y,Z}(x,y,z)$, if they are iid, then $f_{X,Y,Z}(x,y,z)=f_X(x)f_Y(y)f_Z(z)$, where $f_X(x)=f_Y(y)=f_Z(z)$. You have to find the pdf $$f_{X,Y,Z}
Generate three random numbers that sum to 1 in R Analytically: Given a joint pdf for $X$,$Y$, and $Z$ $f_{X,Y,Z}(x,y,z)$, if they are iid, then $f_{X,Y,Z}(x,y,z)=f_X(x)f_Y(y)f_Z(z)$, where $f_X(x)=f_Y(y)=f_Z(z)$. You have to find the pdf $$f_{X,Y,Z}\left(\dfrac{x}{x+y+z}\right)$$ After differentiating and equal to zero, you'll find your mode. Obviosly, mode will depend on $f_{X,Y,Z}(x,y,z)$ and consequently on $f_X(x)$, $f_Y(y)$ and $f_Z(z)$. Numerically: sample three random numbers from your preferred distribution, standardize them and save them in the $i^{th}$-row an Nx3 matrix. Repeat this procedure N times and plot the frequencies of each column. The analytical solution is preferred instead of trying to demonstrate it from random samples in R, which would be just a numerical approximation.
Generate three random numbers that sum to 1 in R Analytically: Given a joint pdf for $X$,$Y$, and $Z$ $f_{X,Y,Z}(x,y,z)$, if they are iid, then $f_{X,Y,Z}(x,y,z)=f_X(x)f_Y(y)f_Z(z)$, where $f_X(x)=f_Y(y)=f_Z(z)$. You have to find the pdf $$f_{X,Y,Z}
30,920
Generate three random numbers that sum to 1 in R
It is still unclear whether the OP wants a solution with a mode of 0.33 or $1 \over 3$ or a mean with one of those two values. Without knowing the exact need, there are multiple possibilities. [1] and [4] below address the problem of getting a mean of ${1 \over 3},$ while [2] and [3] are for a mode of ${1 \over 3}$. [1] Generate $U_1, U_2, U_3$ as continuous uniform random variates on $[0,{2 \over 3}]$. Let $A = {{U_1} \over {U_1 + U_2 + U_3}},$ $B = {{U_2} \over {U_1 + U_2 + U_3}},$ and $C = {{U_3} \over {U_1 + U_2 + U_3}}.$ [2] Let $X=(1/3)*W_1 + (1/6),$ $Y = (1/3)*W_2 + (1/6),$ and $Z = 1 - X - Y,$ where $W_1$ and $W_2$ are continuous uniform on $[0,1].$ $Z$ has a different distribution than $X$ or $Y,$ but I think this will meet the original mode requirement. [3] In an effort to produce a more intuitive version, let $R_1$ be right triangular with left endpoint at zero and mode at ${1 \over 3} .$ Let $L_1$ be left triangular with mode at $1 \over 3$ and right endpoint at ${2 \over 3} .$ Then $R_1 + L_1$ has a unique mode at ${2 \over 3},$ and if we define $Q = 1 - (R_1 + L_1)$ then $Q$ has a unique mode at ${1 \over 3}.$ The pdf of $Q$ is given below in the comments. [4] Trying to be clever, simple, and elegant. Generate 2 independent uniform[0,1] realizations. These 2 points divide the interval from 0 to 1 into 3 pieces. Use the lengths of these 3 pieces as the desired variates. Note how this generalizes intuitively to any sum and any number of random variables. Each variate is identically distributed (another right triangular distribution). Every pairwise correlation is $-{1 \over 2}.$ However, like approach [1], the mean is ${1 \over 3},$ but the mode here is not ${1 \over 3}.$ As whuber noted, it is at zero.
Generate three random numbers that sum to 1 in R
It is still unclear whether the OP wants a solution with a mode of 0.33 or $1 \over 3$ or a mean with one of those two values. Without knowing the exact need, there are multiple possibilities. [1] and
Generate three random numbers that sum to 1 in R It is still unclear whether the OP wants a solution with a mode of 0.33 or $1 \over 3$ or a mean with one of those two values. Without knowing the exact need, there are multiple possibilities. [1] and [4] below address the problem of getting a mean of ${1 \over 3},$ while [2] and [3] are for a mode of ${1 \over 3}$. [1] Generate $U_1, U_2, U_3$ as continuous uniform random variates on $[0,{2 \over 3}]$. Let $A = {{U_1} \over {U_1 + U_2 + U_3}},$ $B = {{U_2} \over {U_1 + U_2 + U_3}},$ and $C = {{U_3} \over {U_1 + U_2 + U_3}}.$ [2] Let $X=(1/3)*W_1 + (1/6),$ $Y = (1/3)*W_2 + (1/6),$ and $Z = 1 - X - Y,$ where $W_1$ and $W_2$ are continuous uniform on $[0,1].$ $Z$ has a different distribution than $X$ or $Y,$ but I think this will meet the original mode requirement. [3] In an effort to produce a more intuitive version, let $R_1$ be right triangular with left endpoint at zero and mode at ${1 \over 3} .$ Let $L_1$ be left triangular with mode at $1 \over 3$ and right endpoint at ${2 \over 3} .$ Then $R_1 + L_1$ has a unique mode at ${2 \over 3},$ and if we define $Q = 1 - (R_1 + L_1)$ then $Q$ has a unique mode at ${1 \over 3}.$ The pdf of $Q$ is given below in the comments. [4] Trying to be clever, simple, and elegant. Generate 2 independent uniform[0,1] realizations. These 2 points divide the interval from 0 to 1 into 3 pieces. Use the lengths of these 3 pieces as the desired variates. Note how this generalizes intuitively to any sum and any number of random variables. Each variate is identically distributed (another right triangular distribution). Every pairwise correlation is $-{1 \over 2}.$ However, like approach [1], the mean is ${1 \over 3},$ but the mode here is not ${1 \over 3}.$ As whuber noted, it is at zero.
Generate three random numbers that sum to 1 in R It is still unclear whether the OP wants a solution with a mode of 0.33 or $1 \over 3$ or a mean with one of those two values. Without knowing the exact need, there are multiple possibilities. [1] and
30,921
Generate three random numbers that sum to 1 in R
Here is a simple approach. Generate $X \sim \mathrm{Unif} [0,{2 \over 3}].$ Let $$Y = \begin{cases} X+{1/3} \ , & \text{if} \ X \le {1/3} \\ X-{1/3} \ , & \text{if} \ {1/3} \lt X \le {2/3} \end{cases}$$ Let $Z = 1 - X - Y.$ Then it's not too hard to show that $X,Y,$ and $Z$ are identically distributed with $\mathrm{Unif} [0,{2 \over 3}]$ distributions. So each has mean and median of ${1 \over 3}$ and has a mode there as well. Additionally, all pairwise correlations are $ -{1 \over 2},$ and only one call to a uniform random generator is needed to get the three variates.
Generate three random numbers that sum to 1 in R
Here is a simple approach. Generate $X \sim \mathrm{Unif} [0,{2 \over 3}].$ Let $$Y = \begin{cases} X+{1/3} \ , & \text{if} \ X \le {1/3} \\ X-{1/3} \ , & \text{if} \ {1/3} \lt X \le {2/3} \end{cases
Generate three random numbers that sum to 1 in R Here is a simple approach. Generate $X \sim \mathrm{Unif} [0,{2 \over 3}].$ Let $$Y = \begin{cases} X+{1/3} \ , & \text{if} \ X \le {1/3} \\ X-{1/3} \ , & \text{if} \ {1/3} \lt X \le {2/3} \end{cases}$$ Let $Z = 1 - X - Y.$ Then it's not too hard to show that $X,Y,$ and $Z$ are identically distributed with $\mathrm{Unif} [0,{2 \over 3}]$ distributions. So each has mean and median of ${1 \over 3}$ and has a mode there as well. Additionally, all pairwise correlations are $ -{1 \over 2},$ and only one call to a uniform random generator is needed to get the three variates.
Generate three random numbers that sum to 1 in R Here is a simple approach. Generate $X \sim \mathrm{Unif} [0,{2 \over 3}].$ Let $$Y = \begin{cases} X+{1/3} \ , & \text{if} \ X \le {1/3} \\ X-{1/3} \ , & \text{if} \ {1/3} \lt X \le {2/3} \end{cases
30,922
How to use a fitted model parameters for forecasting other time series
If you have a fitted arima model, you can apply it to another time series without re-estimating using refit <- Arima(newdata, model=fit) The Arima function is part of the forecast package: Reference
How to use a fitted model parameters for forecasting other time series
If you have a fitted arima model, you can apply it to another time series without re-estimating using refit <- Arima(newdata, model=fit) The Arima function is part of the forecast package: Referenc
How to use a fitted model parameters for forecasting other time series If you have a fitted arima model, you can apply it to another time series without re-estimating using refit <- Arima(newdata, model=fit) The Arima function is part of the forecast package: Reference
How to use a fitted model parameters for forecasting other time series If you have a fitted arima model, you can apply it to another time series without re-estimating using refit <- Arima(newdata, model=fit) The Arima function is part of the forecast package: Referenc
30,923
How to use a fitted model parameters for forecasting other time series
You should already have a predict() function that can accept new data (see ?predict.arima0). Though, there is a nice R package built for forecasting with ARIMA models called forecast that I recommend you play with a bit as well. To forecast using the same parameters on different data, you might try "refitting" the same model on new data but fix the parameters (using the fixed argument to arima()) at the values you estimated on a different data set. Then an arima object is returned with which you can use the available forecasting methods.
How to use a fitted model parameters for forecasting other time series
You should already have a predict() function that can accept new data (see ?predict.arima0). Though, there is a nice R package built for forecasting with ARIMA models called forecast that I recommend
How to use a fitted model parameters for forecasting other time series You should already have a predict() function that can accept new data (see ?predict.arima0). Though, there is a nice R package built for forecasting with ARIMA models called forecast that I recommend you play with a bit as well. To forecast using the same parameters on different data, you might try "refitting" the same model on new data but fix the parameters (using the fixed argument to arima()) at the values you estimated on a different data set. Then an arima object is returned with which you can use the available forecasting methods.
How to use a fitted model parameters for forecasting other time series You should already have a predict() function that can accept new data (see ?predict.arima0). Though, there is a nice R package built for forecasting with ARIMA models called forecast that I recommend
30,924
Explain Kernel density chart
You can think of the Kernel Density Estimation as a smoothed histogram. Histograms are limited by the fact that they are inherently discrete (via bins) and are thus more appropriate for displaying data on discrete variables and can be very sensitive to bin size. What you are actually doing with the Kernel Density Estimation is estimating the probability density function. This makes the interpretation straightforward. So the area under the curve is 1, and the probability of a value being between x1 and x2 is the area under the curve between those two points. The number of Y values will determine the "resolution" of the curve, so if you assume a straight line between every two adjacent Y points you can calculate an approximation of the area under the curve between those two points. To determine the probability of an $x$ value $P(x_a<x<x_b)$: $P(x_a<x<x_b)=y_a+..+y_b$ The result will be more accurate the more $y$ values you have.
Explain Kernel density chart
You can think of the Kernel Density Estimation as a smoothed histogram. Histograms are limited by the fact that they are inherently discrete (via bins) and are thus more appropriate for displaying dat
Explain Kernel density chart You can think of the Kernel Density Estimation as a smoothed histogram. Histograms are limited by the fact that they are inherently discrete (via bins) and are thus more appropriate for displaying data on discrete variables and can be very sensitive to bin size. What you are actually doing with the Kernel Density Estimation is estimating the probability density function. This makes the interpretation straightforward. So the area under the curve is 1, and the probability of a value being between x1 and x2 is the area under the curve between those two points. The number of Y values will determine the "resolution" of the curve, so if you assume a straight line between every two adjacent Y points you can calculate an approximation of the area under the curve between those two points. To determine the probability of an $x$ value $P(x_a<x<x_b)$: $P(x_a<x<x_b)=y_a+..+y_b$ The result will be more accurate the more $y$ values you have.
Explain Kernel density chart You can think of the Kernel Density Estimation as a smoothed histogram. Histograms are limited by the fact that they are inherently discrete (via bins) and are thus more appropriate for displaying dat
30,925
Explain Kernel density chart
Since no reputation to comment on the above post... The expression $P(x_a < x < x_b) = y_a + ... + y_b$, does not look right. Take for example the uniform density function on the interval [0, 1.0], then according to the above and using only $y_a, y_b$ the probability of any interval would be 2. What I think the poster was trying to refer to was the trapezoid rule.
Explain Kernel density chart
Since no reputation to comment on the above post... The expression $P(x_a < x < x_b) = y_a + ... + y_b$, does not look right. Take for example the uniform density function on the interval [0, 1.0], th
Explain Kernel density chart Since no reputation to comment on the above post... The expression $P(x_a < x < x_b) = y_a + ... + y_b$, does not look right. Take for example the uniform density function on the interval [0, 1.0], then according to the above and using only $y_a, y_b$ the probability of any interval would be 2. What I think the poster was trying to refer to was the trapezoid rule.
Explain Kernel density chart Since no reputation to comment on the above post... The expression $P(x_a < x < x_b) = y_a + ... + y_b$, does not look right. Take for example the uniform density function on the interval [0, 1.0], th
30,926
Justification for low/high or tertiary splits in ANOVA
(This isn't a direct answer to the question, more a bunch of references relating to why the approach should be avoided.) Some of the issues include downward bias in estimation of effects, inflation of error variance and (consequently) low power. There's also the dependence issue that impacts the calculation of p-values (i.e. p-values calculated in the 'usual' way are not correct). There's a wealth of material on why median (etc) splits of variables are a bad idea. http://www.uvm.edu/~dhowell/gradstat/psych341/lectures/Factorial2Folder/Median-split.html http://psych.colorado.edu/~mcclella/MedianSplit/ http://core.ecu.edu/psyc/wuenschk/stathelp/Dichot-Not.doc MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7, 19–40. here Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions http://www.theanalysisfactor.com/continuous-and-categorical-variables-the-trouble-with-median-splits/ Google turns up a bunch more references and links Cutting in 3 or 4 doesn't avoid the problems but it's not quite as bad.
Justification for low/high or tertiary splits in ANOVA
(This isn't a direct answer to the question, more a bunch of references relating to why the approach should be avoided.) Some of the issues include downward bias in estimation of effects, inflation of
Justification for low/high or tertiary splits in ANOVA (This isn't a direct answer to the question, more a bunch of references relating to why the approach should be avoided.) Some of the issues include downward bias in estimation of effects, inflation of error variance and (consequently) low power. There's also the dependence issue that impacts the calculation of p-values (i.e. p-values calculated in the 'usual' way are not correct). There's a wealth of material on why median (etc) splits of variables are a bad idea. http://www.uvm.edu/~dhowell/gradstat/psych341/lectures/Factorial2Folder/Median-split.html http://psych.colorado.edu/~mcclella/MedianSplit/ http://core.ecu.edu/psyc/wuenschk/stathelp/Dichot-Not.doc MacCallum, R. C., Zhang, S., Preacher, K. J., & Rucker, D. D. (2002). On the practice of dichotomization of quantitative variables. Psychological Methods, 7, 19–40. here Aiken, L. S., & West, S. G. (1991). Multiple regression: Testing and interpreting interactions http://www.theanalysisfactor.com/continuous-and-categorical-variables-the-trouble-with-median-splits/ Google turns up a bunch more references and links Cutting in 3 or 4 doesn't avoid the problems but it's not quite as bad.
Justification for low/high or tertiary splits in ANOVA (This isn't a direct answer to the question, more a bunch of references relating to why the approach should be avoided.) Some of the issues include downward bias in estimation of effects, inflation of
30,927
Confusion regarding kriging
Suppose $\left(Z_0, Z_1, \ldots, Z_n\right)$ is a vector assumed to have a multivariate distribution of unknown mean $(\mu, \mu, \ldots, \mu)$ and known variance-covariance matrix $\Sigma$. We observe $\left(z_1, z_2, \ldots, z_n\right)$ from this distribution and wish to predict $z_0$ from this information using an unbiased linear predictor: Linear means the prediction must take the form $\hat{z_0} = \lambda_1 z_1 + \lambda_2 z_2 + \cdots + \lambda_n z_n$ for coefficients $\lambda_i$ to be determined. These coefficients can depend at most on what is known in advance: namely, the entries of $\Sigma$. This predictor can also be considered a random variable $\hat{Z_0} = \lambda_1 Z_1 + \lambda_2 Z_2 + \cdots + \lambda_n Z_n$. Unbiased means the expectation of $\hat{Z_0}$ equals its (unknown) mean $\mu$. Writing things out gives some information about the coefficients: $$\eqalign{ \mu &= E[\hat{Z_0}] = E[\lambda_1 Z_1 + \lambda_2 Z_2 + \cdots + \lambda_n Z_n] \\ &= \lambda_1 E[Z_1] + \lambda_2 E[Z_2] + \cdots + \lambda_n E[Z_n] \\ &= \lambda_1 \mu + \cdots + \lambda_n \mu \\ &= \left(\lambda_1 + \cdots + \lambda_n\right) \mu. \\ }$$ The second line is due to linearity of expectation and all the rest is simple algebra. Because this procedure is suppose to work regardless of the value of $\mu$, evidently the coefficients have to sum to unity. Writing the coefficients in vector notation $\lambda = (\lambda_i)'$, this can be neatly written $\mathbf{1}\lambda=1$. Among the set of all such unbiased linear predictors, we seek one that deviates as little from the real value as possible, measured in the room mean square. This, again, is a computation. It relies on the bilinearity and symmetry of covariance, whose application is responsible for the summations in the second line: $$\eqalign{ E[(\hat{Z_0} - Z_0)^2] &= E[(\lambda_1 Z_1 + \lambda_2 Z_2 + \cdots + \lambda_n Z_n - Z_0)^2] \\ &= \sum_{i=1}^n \sum_{j=1}^n \lambda_i \lambda_j \text{var}[Z_i, Z_j]-2\sum_{i=1}^n\lambda_i \text{var}[Z_i, Z_0] + \text{var}[Z_0, Z_0] \\ &= \sum_{i=1}^n \sum_{j=1}^n \lambda_i \lambda_j \Sigma_{i,j} - 2\sum_{i=1}^n\lambda_i\Sigma_{0,i} + \Sigma_{0,0}. }$$ Whence the coefficients can be obtained by minimizing this quadratic form subject to the (linear) constraint $\mathbf{1}\lambda=1$. This is readily solved using the method of Lagrange multipliers, yielding a linear system of equations, the "Kriging equations." In the application, $Z$ is a spatial stochastic process ("random field"). This means that for any given set of fixed (not random) locations $\mathbf{x_0}, \ldots, \mathbf{x_n}$, the vector of values of $Z$ at those locations, $\left(Z(\mathbf{x_0}), \ldots, Z(\mathbf{x_n})\right)$ is random with some kind of a multivariate distribution. Write $Z_i = Z(\mathbf{x_i})$ and apply the foregoing analysis, assuming the means of the process at all $n+1$ locations $\mathbf{x_i}$ are the same and assuming the covariance matrix of the process values at these $n+1$ locations is known with certainty. Let's interpret this. Under the assumptions (including constant mean and known covariance), the coefficients determine the minimum variance attainable by any linear estimator. Let's call this variance $\sigma_{OK}^2$ ("OK" is for "ordinary kriging"). It depends solely on the matrix $\Sigma$. It tells us that if we were to repeatedly sample from $\left(Z_0, \ldots, Z_n\right)$ and use these coefficients to predict the $z_0$ values from the remaining values each time, then On the average our predictions would be correct. Typically, our predictions of the $z_0$ would deviate about $\sigma_{OK}$ from the actual values of the $z_0$. Much more needs to be said before this can be applied to practical situations like estimating a surface from punctual data: we need additional assumptions about how the statistical characteristics of the spatial process vary from one location to another and from one realization to another (even though, in practice, usually only one realization will ever be available). But this exposition should be enough to follow how the search for a "Best" Unbiased Linear Predictor ("BLUP") leads straightforwardly to a system of linear equations. By the way, kriging as usually practiced is not quite the same as least squares estimation, because $\Sigma$ is estimated in a preliminary procedure (known as "variography") using the same data. That is contrary to the assumptions of this derivation, which assumed $\Sigma$ was known (and a fortiori independent of the data). Thus, at the very outset, kriging has some conceptual and statistical flaws built into it. Thoughtful practitioners have always been aware of this and found various creative ways to (try to) justify the inconsistencies. (Having lots of data can really help.) Procedures now exist for simultaneously estimating $\Sigma$ and predicting a collection of values at unknown locations. They require slightly stronger assumptions (multivariate normality) in order to accomplish this feat.
Confusion regarding kriging
Suppose $\left(Z_0, Z_1, \ldots, Z_n\right)$ is a vector assumed to have a multivariate distribution of unknown mean $(\mu, \mu, \ldots, \mu)$ and known variance-covariance matrix $\Sigma$. We observ
Confusion regarding kriging Suppose $\left(Z_0, Z_1, \ldots, Z_n\right)$ is a vector assumed to have a multivariate distribution of unknown mean $(\mu, \mu, \ldots, \mu)$ and known variance-covariance matrix $\Sigma$. We observe $\left(z_1, z_2, \ldots, z_n\right)$ from this distribution and wish to predict $z_0$ from this information using an unbiased linear predictor: Linear means the prediction must take the form $\hat{z_0} = \lambda_1 z_1 + \lambda_2 z_2 + \cdots + \lambda_n z_n$ for coefficients $\lambda_i$ to be determined. These coefficients can depend at most on what is known in advance: namely, the entries of $\Sigma$. This predictor can also be considered a random variable $\hat{Z_0} = \lambda_1 Z_1 + \lambda_2 Z_2 + \cdots + \lambda_n Z_n$. Unbiased means the expectation of $\hat{Z_0}$ equals its (unknown) mean $\mu$. Writing things out gives some information about the coefficients: $$\eqalign{ \mu &= E[\hat{Z_0}] = E[\lambda_1 Z_1 + \lambda_2 Z_2 + \cdots + \lambda_n Z_n] \\ &= \lambda_1 E[Z_1] + \lambda_2 E[Z_2] + \cdots + \lambda_n E[Z_n] \\ &= \lambda_1 \mu + \cdots + \lambda_n \mu \\ &= \left(\lambda_1 + \cdots + \lambda_n\right) \mu. \\ }$$ The second line is due to linearity of expectation and all the rest is simple algebra. Because this procedure is suppose to work regardless of the value of $\mu$, evidently the coefficients have to sum to unity. Writing the coefficients in vector notation $\lambda = (\lambda_i)'$, this can be neatly written $\mathbf{1}\lambda=1$. Among the set of all such unbiased linear predictors, we seek one that deviates as little from the real value as possible, measured in the room mean square. This, again, is a computation. It relies on the bilinearity and symmetry of covariance, whose application is responsible for the summations in the second line: $$\eqalign{ E[(\hat{Z_0} - Z_0)^2] &= E[(\lambda_1 Z_1 + \lambda_2 Z_2 + \cdots + \lambda_n Z_n - Z_0)^2] \\ &= \sum_{i=1}^n \sum_{j=1}^n \lambda_i \lambda_j \text{var}[Z_i, Z_j]-2\sum_{i=1}^n\lambda_i \text{var}[Z_i, Z_0] + \text{var}[Z_0, Z_0] \\ &= \sum_{i=1}^n \sum_{j=1}^n \lambda_i \lambda_j \Sigma_{i,j} - 2\sum_{i=1}^n\lambda_i\Sigma_{0,i} + \Sigma_{0,0}. }$$ Whence the coefficients can be obtained by minimizing this quadratic form subject to the (linear) constraint $\mathbf{1}\lambda=1$. This is readily solved using the method of Lagrange multipliers, yielding a linear system of equations, the "Kriging equations." In the application, $Z$ is a spatial stochastic process ("random field"). This means that for any given set of fixed (not random) locations $\mathbf{x_0}, \ldots, \mathbf{x_n}$, the vector of values of $Z$ at those locations, $\left(Z(\mathbf{x_0}), \ldots, Z(\mathbf{x_n})\right)$ is random with some kind of a multivariate distribution. Write $Z_i = Z(\mathbf{x_i})$ and apply the foregoing analysis, assuming the means of the process at all $n+1$ locations $\mathbf{x_i}$ are the same and assuming the covariance matrix of the process values at these $n+1$ locations is known with certainty. Let's interpret this. Under the assumptions (including constant mean and known covariance), the coefficients determine the minimum variance attainable by any linear estimator. Let's call this variance $\sigma_{OK}^2$ ("OK" is for "ordinary kriging"). It depends solely on the matrix $\Sigma$. It tells us that if we were to repeatedly sample from $\left(Z_0, \ldots, Z_n\right)$ and use these coefficients to predict the $z_0$ values from the remaining values each time, then On the average our predictions would be correct. Typically, our predictions of the $z_0$ would deviate about $\sigma_{OK}$ from the actual values of the $z_0$. Much more needs to be said before this can be applied to practical situations like estimating a surface from punctual data: we need additional assumptions about how the statistical characteristics of the spatial process vary from one location to another and from one realization to another (even though, in practice, usually only one realization will ever be available). But this exposition should be enough to follow how the search for a "Best" Unbiased Linear Predictor ("BLUP") leads straightforwardly to a system of linear equations. By the way, kriging as usually practiced is not quite the same as least squares estimation, because $\Sigma$ is estimated in a preliminary procedure (known as "variography") using the same data. That is contrary to the assumptions of this derivation, which assumed $\Sigma$ was known (and a fortiori independent of the data). Thus, at the very outset, kriging has some conceptual and statistical flaws built into it. Thoughtful practitioners have always been aware of this and found various creative ways to (try to) justify the inconsistencies. (Having lots of data can really help.) Procedures now exist for simultaneously estimating $\Sigma$ and predicting a collection of values at unknown locations. They require slightly stronger assumptions (multivariate normality) in order to accomplish this feat.
Confusion regarding kriging Suppose $\left(Z_0, Z_1, \ldots, Z_n\right)$ is a vector assumed to have a multivariate distribution of unknown mean $(\mu, \mu, \ldots, \mu)$ and known variance-covariance matrix $\Sigma$. We observ
30,928
Confusion regarding kriging
Kriging is simply least squares estimation for spatial data. As such it provides a linear unbiased estimator that minimizes the sum of squared errors. Since it is unbiased the MSE = the estimator variance and is a minimum.
Confusion regarding kriging
Kriging is simply least squares estimation for spatial data. As such it provides a linear unbiased estimator that minimizes the sum of squared errors. Since it is unbiased the MSE = the estimator va
Confusion regarding kriging Kriging is simply least squares estimation for spatial data. As such it provides a linear unbiased estimator that minimizes the sum of squared errors. Since it is unbiased the MSE = the estimator variance and is a minimum.
Confusion regarding kriging Kriging is simply least squares estimation for spatial data. As such it provides a linear unbiased estimator that minimizes the sum of squared errors. Since it is unbiased the MSE = the estimator va
30,929
What are the differences between the linear regression and mixed models?
The main difference comes in what types of questions you are trying to answer with your analysis and how you consider the factor school. In lmfit you are considering school to be a fixed effect, which means that you are only interested in the schools that are in your data set, but you are (possibly) interested in specific differences between the schools. With this model you cannot say anything about students at schools that are not in your sample (because you have no information on their fixed effect). In lmefit your are considering school to be a random effect, or essentially the schools in your data set are a random sample from a larger population of schools. Here you are generally uninterested is specific comparison between schools, but could be interested in prediction for schools in the original sample and predictions for schools that were not in the original sample. If I have data from all the schools in my area and am interested in seeing if there is a difference between 2 schools that I am considering sending my children to (and if so which is better) then I would use the fixed effects model. If I am interested in making predictions and may make a prediction for schools not in my data set (since I only have a subset of the schools) then I would use the mixed effects model. If I believe that there could be an effect due to schools, but I don't care specifically about comparisons between schools, just want to allow the model to adjust for the clustering, but will do all inference on the SES variable, then I would use the mixed effect model (though the fixed would work in this case, but using as an adjustment is a bit more natural as a random effect).
What are the differences between the linear regression and mixed models?
The main difference comes in what types of questions you are trying to answer with your analysis and how you consider the factor school. In lmfit you are considering school to be a fixed effect, which
What are the differences between the linear regression and mixed models? The main difference comes in what types of questions you are trying to answer with your analysis and how you consider the factor school. In lmfit you are considering school to be a fixed effect, which means that you are only interested in the schools that are in your data set, but you are (possibly) interested in specific differences between the schools. With this model you cannot say anything about students at schools that are not in your sample (because you have no information on their fixed effect). In lmefit your are considering school to be a random effect, or essentially the schools in your data set are a random sample from a larger population of schools. Here you are generally uninterested is specific comparison between schools, but could be interested in prediction for schools in the original sample and predictions for schools that were not in the original sample. If I have data from all the schools in my area and am interested in seeing if there is a difference between 2 schools that I am considering sending my children to (and if so which is better) then I would use the fixed effects model. If I am interested in making predictions and may make a prediction for schools not in my data set (since I only have a subset of the schools) then I would use the mixed effects model. If I believe that there could be an effect due to schools, but I don't care specifically about comparisons between schools, just want to allow the model to adjust for the clustering, but will do all inference on the SES variable, then I would use the mixed effect model (though the fixed would work in this case, but using as an adjustment is a bit more natural as a random effect).
What are the differences between the linear regression and mixed models? The main difference comes in what types of questions you are trying to answer with your analysis and how you consider the factor school. In lmfit you are considering school to be a fixed effect, which
30,930
What are the differences between the linear regression and mixed models?
The second model is effectively a fixed effects regression, while the first one is a random effect model. These are two very different models, of course. In the fixed effects model you assume that each schools has its own intercept, that captures something about this school, or characterizes it in some way. That's why this intercept can be correlated with the regression design matrix, i.e. with other predictors. In random effects model the intercept of each school will be different too, but it just happened randomly, almost as if it was randomly picked from the common distribution. The intercept in this case cannot be correlated with other predictors, the design matrix.
What are the differences between the linear regression and mixed models?
The second model is effectively a fixed effects regression, while the first one is a random effect model. These are two very different models, of course. In the fixed effects model you assume that eac
What are the differences between the linear regression and mixed models? The second model is effectively a fixed effects regression, while the first one is a random effect model. These are two very different models, of course. In the fixed effects model you assume that each schools has its own intercept, that captures something about this school, or characterizes it in some way. That's why this intercept can be correlated with the regression design matrix, i.e. with other predictors. In random effects model the intercept of each school will be different too, but it just happened randomly, almost as if it was randomly picked from the common distribution. The intercept in this case cannot be correlated with other predictors, the design matrix.
What are the differences between the linear regression and mixed models? The second model is effectively a fixed effects regression, while the first one is a random effect model. These are two very different models, of course. In the fixed effects model you assume that eac
30,931
What are the differences between the linear regression and mixed models?
I provide a small demo in R from atmosphere science. Consider Ozone layer that varies due to reasons such as seasons (winter, summer, ...) and the tilt of the Earth axle. You can see here some monthly variations below (Source of the picture here.) where you can see that there are some monthly functions in the layer. If you ask about the total trend (and not monthly figures), you are good to go with standard linear regression while hierarchial/mixed model helps you answer questions specific to months such that > coef(fitML_) (Intercept) Month 15.656673 3.677636 > coef(fitML_hierarchial) $Month (Intercept) 5 25.52226 6 32.48859 7 57.14909 8 57.90293 9 32.40247 attr(,"class") [1] "coef.mer" where alas R's ozone data is only half year long. Small working example library(ggplot2) library(lme4) library(forecast) library(lmerTest) library(gridExtra) data(airquality) ggplot(data=airquality) + aes(y=Ozone, x=as.Date(as.character(paste("20180", airquality$Month, airquality$Day, sep="")), format="%Y%m%d") ) + geom_smooth() fitML_ <- lm(data=airquality, Ozone ~ Month) fitML_hierarchial <- lmerTest::lmer(data=airquality, Ozone ~ 1 + (1|Month)) predLM_ <- predict(fitML_) predLM_hierarchial <- predict(fitML_hierarchial) predDates_<-seq.Date(as.Date(as.character("20181001"), format="%Y%m%d"), by = 1, length.out = 116) #LM g1<-ggplot(data=airquality) + aes(y=Ozone, x=as.Date(as.character(paste("20180", airquality$Month, airquality$Day, sep="")), format="%Y%m%d") ) + geom_smooth() + geom_smooth(data=data.frame(predDates=predDates_, predML=predLM_), aes(x=predDates_, y=predLM_)) #LM hierarchial g2<-ggplot(data=airquality) + aes(y=Ozone, x=as.Date(as.character(paste("20180", airquality$Month, airquality$Day, sep="")), format="%Y%m%d") ) + geom_smooth() + geom_smooth(data=data.frame(predDates=predDates_, predML=predLM_), aes(x=predDates_, y=predLM_hierarchial)) grid.arrange(g1, g2) #ggplot(data=airquality) + aes(y=Ozone, x=as.Date(as.character(paste("20180", airquality$Month, airquality$Day, sep="")), format="%Y%m%d") ) + geom_smooth() + geom_point(aes(x=as.Date("20180701", format="%Y%m%d"), y=25), colour="red") where you can see that the hierarchial model has tried to predict the monthly fluctuations while the linear regression just minimising the R2 over the whole data. Alas with longer data you should get better predictions, here we have only one data point per month, hence the poor quality in predictions.
What are the differences between the linear regression and mixed models?
I provide a small demo in R from atmosphere science. Consider Ozone layer that varies due to reasons such as seasons (winter, summer, ...) and the tilt of the Earth axle. You can see here some monthly
What are the differences between the linear regression and mixed models? I provide a small demo in R from atmosphere science. Consider Ozone layer that varies due to reasons such as seasons (winter, summer, ...) and the tilt of the Earth axle. You can see here some monthly variations below (Source of the picture here.) where you can see that there are some monthly functions in the layer. If you ask about the total trend (and not monthly figures), you are good to go with standard linear regression while hierarchial/mixed model helps you answer questions specific to months such that > coef(fitML_) (Intercept) Month 15.656673 3.677636 > coef(fitML_hierarchial) $Month (Intercept) 5 25.52226 6 32.48859 7 57.14909 8 57.90293 9 32.40247 attr(,"class") [1] "coef.mer" where alas R's ozone data is only half year long. Small working example library(ggplot2) library(lme4) library(forecast) library(lmerTest) library(gridExtra) data(airquality) ggplot(data=airquality) + aes(y=Ozone, x=as.Date(as.character(paste("20180", airquality$Month, airquality$Day, sep="")), format="%Y%m%d") ) + geom_smooth() fitML_ <- lm(data=airquality, Ozone ~ Month) fitML_hierarchial <- lmerTest::lmer(data=airquality, Ozone ~ 1 + (1|Month)) predLM_ <- predict(fitML_) predLM_hierarchial <- predict(fitML_hierarchial) predDates_<-seq.Date(as.Date(as.character("20181001"), format="%Y%m%d"), by = 1, length.out = 116) #LM g1<-ggplot(data=airquality) + aes(y=Ozone, x=as.Date(as.character(paste("20180", airquality$Month, airquality$Day, sep="")), format="%Y%m%d") ) + geom_smooth() + geom_smooth(data=data.frame(predDates=predDates_, predML=predLM_), aes(x=predDates_, y=predLM_)) #LM hierarchial g2<-ggplot(data=airquality) + aes(y=Ozone, x=as.Date(as.character(paste("20180", airquality$Month, airquality$Day, sep="")), format="%Y%m%d") ) + geom_smooth() + geom_smooth(data=data.frame(predDates=predDates_, predML=predLM_), aes(x=predDates_, y=predLM_hierarchial)) grid.arrange(g1, g2) #ggplot(data=airquality) + aes(y=Ozone, x=as.Date(as.character(paste("20180", airquality$Month, airquality$Day, sep="")), format="%Y%m%d") ) + geom_smooth() + geom_point(aes(x=as.Date("20180701", format="%Y%m%d"), y=25), colour="red") where you can see that the hierarchial model has tried to predict the monthly fluctuations while the linear regression just minimising the R2 over the whole data. Alas with longer data you should get better predictions, here we have only one data point per month, hence the poor quality in predictions.
What are the differences between the linear regression and mixed models? I provide a small demo in R from atmosphere science. Consider Ozone layer that varies due to reasons such as seasons (winter, summer, ...) and the tilt of the Earth axle. You can see here some monthly
30,932
How to do post-hoc tests with logistic regression?
Unfortunately, I don't know SPSS. That said, if you want to carry out a Wald test where the null is $H_0: \beta_{groupA} - \beta_{groupB} = 0$ you could ask SPSS the variance/covariance matrix of your parameter estimates and construct the Wald test by hand. Under $H_0$ your test statistics $\chi^2_{obs}$ is distributed as a $\chi^2$ r.v. with 1 degree of freedom $$ \chi^2_{obs} = \frac{(\hat{\beta}_{groupA} - \hat{\beta}_{groupB})^2}{{\rm var}[\hat{\beta}_{groupA}]+{\rm var}[\hat{\beta}_{groupB}]-2*{\rm cov}[\hat{\beta}_{groupA},\hat{\beta}_{groupB}]} $$ Now you can calculate your p-value. But I am sure that SPSS has a command to perform specific tests on the estimated parameters.
How to do post-hoc tests with logistic regression?
Unfortunately, I don't know SPSS. That said, if you want to carry out a Wald test where the null is $H_0: \beta_{groupA} - \beta_{groupB} = 0$ you could ask SPSS the variance/covariance matrix of you
How to do post-hoc tests with logistic regression? Unfortunately, I don't know SPSS. That said, if you want to carry out a Wald test where the null is $H_0: \beta_{groupA} - \beta_{groupB} = 0$ you could ask SPSS the variance/covariance matrix of your parameter estimates and construct the Wald test by hand. Under $H_0$ your test statistics $\chi^2_{obs}$ is distributed as a $\chi^2$ r.v. with 1 degree of freedom $$ \chi^2_{obs} = \frac{(\hat{\beta}_{groupA} - \hat{\beta}_{groupB})^2}{{\rm var}[\hat{\beta}_{groupA}]+{\rm var}[\hat{\beta}_{groupB}]-2*{\rm cov}[\hat{\beta}_{groupA},\hat{\beta}_{groupB}]} $$ Now you can calculate your p-value. But I am sure that SPSS has a command to perform specific tests on the estimated parameters.
How to do post-hoc tests with logistic regression? Unfortunately, I don't know SPSS. That said, if you want to carry out a Wald test where the null is $H_0: \beta_{groupA} - \beta_{groupB} = 0$ you could ask SPSS the variance/covariance matrix of you
30,933
How to do post-hoc tests with logistic regression?
SPSS only let me compare individual groups to the Control group. Actually SPSS Logistic Regression has about 6 built-in types of contrasts. One of them (Indicator) compares each group to a control group, which you can specify using the group's number. I.e., among groups numbered 1 through 4 and labeled as North, South, East, and West, "indicator(3)" will set East as the control group. Another type (Deviation) shows how each group's logit deviates from the (unweighted) average group's logit. It's useful to go into either the general Help files or the Command Syntax Reference, also found in Help, to find the definitions for each type. Personally, I find that Deviation and Indicator are all I ever seem to need. Maybe that makes me a minor-leaguer :-)
How to do post-hoc tests with logistic regression?
SPSS only let me compare individual groups to the Control group. Actually SPSS Logistic Regression has about 6 built-in types of contrasts. One of them (Indicator) compares each group to a control g
How to do post-hoc tests with logistic regression? SPSS only let me compare individual groups to the Control group. Actually SPSS Logistic Regression has about 6 built-in types of contrasts. One of them (Indicator) compares each group to a control group, which you can specify using the group's number. I.e., among groups numbered 1 through 4 and labeled as North, South, East, and West, "indicator(3)" will set East as the control group. Another type (Deviation) shows how each group's logit deviates from the (unweighted) average group's logit. It's useful to go into either the general Help files or the Command Syntax Reference, also found in Help, to find the definitions for each type. Personally, I find that Deviation and Indicator are all I ever seem to need. Maybe that makes me a minor-leaguer :-)
How to do post-hoc tests with logistic regression? SPSS only let me compare individual groups to the Control group. Actually SPSS Logistic Regression has about 6 built-in types of contrasts. One of them (Indicator) compares each group to a control g
30,934
How to do post-hoc tests with logistic regression?
In R you can do general multiplicity-adjusted contrasts for logistic regression. See for example the rms package's contrast.rms function, which uses the R multcomp package.
How to do post-hoc tests with logistic regression?
In R you can do general multiplicity-adjusted contrasts for logistic regression. See for example the rms package's contrast.rms function, which uses the R multcomp package.
How to do post-hoc tests with logistic regression? In R you can do general multiplicity-adjusted contrasts for logistic regression. See for example the rms package's contrast.rms function, which uses the R multcomp package.
How to do post-hoc tests with logistic regression? In R you can do general multiplicity-adjusted contrasts for logistic regression. See for example the rms package's contrast.rms function, which uses the R multcomp package.
30,935
Standard error of a ratio
1) The variance of a ratio is approximately: $Var(x/y) \approx \left(\frac{E(x)}{E(y)}\right)^2 \left(\frac{Var(x)}{E(x)^2} + \frac{Var(y)}{E(y)^2} - 2 \frac{Cov(x,y)}{E(x)E(y)}\right)$ You might want to look at the answers to this question for more information. Usually regression packages do provide at least the option to print out the estimated covariance matrix of the parameter estimates, so perhaps there's some way of getting that covariance term. 2) However, the bootstrap may give you more accurate confidence intervals, especially if your denominator variable is not many standard errors away from zero.
Standard error of a ratio
1) The variance of a ratio is approximately: $Var(x/y) \approx \left(\frac{E(x)}{E(y)}\right)^2 \left(\frac{Var(x)}{E(x)^2} + \frac{Var(y)}{E(y)^2} - 2 \frac{Cov(x,y)}{E(x)E(y)}\right)$ You might wan
Standard error of a ratio 1) The variance of a ratio is approximately: $Var(x/y) \approx \left(\frac{E(x)}{E(y)}\right)^2 \left(\frac{Var(x)}{E(x)^2} + \frac{Var(y)}{E(y)^2} - 2 \frac{Cov(x,y)}{E(x)E(y)}\right)$ You might want to look at the answers to this question for more information. Usually regression packages do provide at least the option to print out the estimated covariance matrix of the parameter estimates, so perhaps there's some way of getting that covariance term. 2) However, the bootstrap may give you more accurate confidence intervals, especially if your denominator variable is not many standard errors away from zero.
Standard error of a ratio 1) The variance of a ratio is approximately: $Var(x/y) \approx \left(\frac{E(x)}{E(y)}\right)^2 \left(\frac{Var(x)}{E(x)^2} + \frac{Var(y)}{E(y)^2} - 2 \frac{Cov(x,y)}{E(x)E(y)}\right)$ You might wan
30,936
Standard error of a ratio
With the Bayesian approach it is easy to simulate from the posterior distributions of $A$ and $B$ and then to get simulations of the posterior distribution of $A/B$. Using the standard noninformative prior for the Gaussian linear model, we do not need MCMC techniques and we probably obtain a good frequentist-matching property: a $95\%$-posterior credibility interval is approximately a $95\%$-confidence interval in the frequentist sense.
Standard error of a ratio
With the Bayesian approach it is easy to simulate from the posterior distributions of $A$ and $B$ and then to get simulations of the posterior distribution of $A/B$. Using the standard noninformative
Standard error of a ratio With the Bayesian approach it is easy to simulate from the posterior distributions of $A$ and $B$ and then to get simulations of the posterior distribution of $A/B$. Using the standard noninformative prior for the Gaussian linear model, we do not need MCMC techniques and we probably obtain a good frequentist-matching property: a $95\%$-posterior credibility interval is approximately a $95\%$-confidence interval in the frequentist sense.
Standard error of a ratio With the Bayesian approach it is easy to simulate from the posterior distributions of $A$ and $B$ and then to get simulations of the posterior distribution of $A/B$. Using the standard noninformative
30,937
Standard error of a ratio
I asked a similar question here: Testing the significance of differences between ratios with small samples Hopefully you will find some of the answers useful!
Standard error of a ratio
I asked a similar question here: Testing the significance of differences between ratios with small samples Hopefully you will find some of the answers useful!
Standard error of a ratio I asked a similar question here: Testing the significance of differences between ratios with small samples Hopefully you will find some of the answers useful!
Standard error of a ratio I asked a similar question here: Testing the significance of differences between ratios with small samples Hopefully you will find some of the answers useful!
30,938
Find the equation from generalized linear model output
This generalized linear model supposes the outcome associated with an independent value of $x$ has a binomial distribution whose log odds ("logit") vary linearly with $x$. The output provides the coefficients of that linear relation; namely, the intercept is estimated as -0.9781 and the slope ("our_bid") as -0.002050. You can see them in the Estimate column: Estimate Std. Error z value Pr(>|z|) (Intercept) -9.781e-01 2.836e-02 -34.49 <2e-16 *** our_bid -2.050e-03 7.576e-05 -27.07 <2e-16 *** The probability, which you wish to plot, is related to the log odds by $$\text{probability} = \frac{1}{1 + \exp(-\text{log odds})}.$$ R calls this the "inverse logit" function, inv.logit. Putting these together gives the equation $$\text{probability} = \frac{1}{1 + \exp\left(-[-0.9781 - 0.00205 x]\right)}.$$ An R command to plot it would be plot(inv.logit(-0.9781 - 0.00205*(0:1000))) In general, you should extract these coefficients with the coefficients command rather than transcribing them (as I did here, because I do not have access to your data).
Find the equation from generalized linear model output
This generalized linear model supposes the outcome associated with an independent value of $x$ has a binomial distribution whose log odds ("logit") vary linearly with $x$. The output provides the coe
Find the equation from generalized linear model output This generalized linear model supposes the outcome associated with an independent value of $x$ has a binomial distribution whose log odds ("logit") vary linearly with $x$. The output provides the coefficients of that linear relation; namely, the intercept is estimated as -0.9781 and the slope ("our_bid") as -0.002050. You can see them in the Estimate column: Estimate Std. Error z value Pr(>|z|) (Intercept) -9.781e-01 2.836e-02 -34.49 <2e-16 *** our_bid -2.050e-03 7.576e-05 -27.07 <2e-16 *** The probability, which you wish to plot, is related to the log odds by $$\text{probability} = \frac{1}{1 + \exp(-\text{log odds})}.$$ R calls this the "inverse logit" function, inv.logit. Putting these together gives the equation $$\text{probability} = \frac{1}{1 + \exp\left(-[-0.9781 - 0.00205 x]\right)}.$$ An R command to plot it would be plot(inv.logit(-0.9781 - 0.00205*(0:1000))) In general, you should extract these coefficients with the coefficients command rather than transcribing them (as I did here, because I do not have access to your data).
Find the equation from generalized linear model output This generalized linear model supposes the outcome associated with an independent value of $x$ has a binomial distribution whose log odds ("logit") vary linearly with $x$. The output provides the coe
30,939
Microsoft Excel formula for variance
Use VARP for the variance you want ("population variance"). VAR is the unbiased estimator for a normally distributed population.
Microsoft Excel formula for variance
Use VARP for the variance you want ("population variance"). VAR is the unbiased estimator for a normally distributed population.
Microsoft Excel formula for variance Use VARP for the variance you want ("population variance"). VAR is the unbiased estimator for a normally distributed population.
Microsoft Excel formula for variance Use VARP for the variance you want ("population variance"). VAR is the unbiased estimator for a normally distributed population.
30,940
Microsoft Excel formula for variance
Use VAR (with n-1 denominator) when you wish to estimate the variance of the underlying population from the sample, or VARP (with n denominator) when the sample is the population. I find the name "population variance" quite ambiguous...
Microsoft Excel formula for variance
Use VAR (with n-1 denominator) when you wish to estimate the variance of the underlying population from the sample, or VARP (with n denominator) when the sample is the population. I find the name "po
Microsoft Excel formula for variance Use VAR (with n-1 denominator) when you wish to estimate the variance of the underlying population from the sample, or VARP (with n denominator) when the sample is the population. I find the name "population variance" quite ambiguous...
Microsoft Excel formula for variance Use VAR (with n-1 denominator) when you wish to estimate the variance of the underlying population from the sample, or VARP (with n denominator) when the sample is the population. I find the name "po
30,941
Relation between logistic regression coefficient and odds ratio in JMP
Ok, I drop a quick response. Your idea is correct in that the regression coefficient is the log of the OR. More precisely, if $b$ is your regression coefficient, $\exp(b)$ is the odds ratio corresponding to a one unit change in your variable. So, to get back to the adjusted odds, you need to know what are the internal coding convention for your factor levels. Usually, for a binary variable it is 0/1 or 1/2. But if it happens that your levels are represented as -1/+1 (which I suspect here), then you have to multiply the regression coefficient by 2 when exponentiating. The same would apply if you were working with a continuous variable, like age, and want to express the odds for 5 years ($\exp(5b)$) instead of 1 year ($\exp(b)$). Update: I just found this about JMP coding for nominal variables (version < 7).
Relation between logistic regression coefficient and odds ratio in JMP
Ok, I drop a quick response. Your idea is correct in that the regression coefficient is the log of the OR. More precisely, if $b$ is your regression coefficient, $\exp(b)$ is the odds ratio correspond
Relation between logistic regression coefficient and odds ratio in JMP Ok, I drop a quick response. Your idea is correct in that the regression coefficient is the log of the OR. More precisely, if $b$ is your regression coefficient, $\exp(b)$ is the odds ratio corresponding to a one unit change in your variable. So, to get back to the adjusted odds, you need to know what are the internal coding convention for your factor levels. Usually, for a binary variable it is 0/1 or 1/2. But if it happens that your levels are represented as -1/+1 (which I suspect here), then you have to multiply the regression coefficient by 2 when exponentiating. The same would apply if you were working with a continuous variable, like age, and want to express the odds for 5 years ($\exp(5b)$) instead of 1 year ($\exp(b)$). Update: I just found this about JMP coding for nominal variables (version < 7).
Relation between logistic regression coefficient and odds ratio in JMP Ok, I drop a quick response. Your idea is correct in that the regression coefficient is the log of the OR. More precisely, if $b$ is your regression coefficient, $\exp(b)$ is the odds ratio correspond
30,942
Finding marginal densities of $f (x,y) = c \sqrt{1 - x^2 - y^2}, x^2 + y^2 \leq 1$
Geometry helps here. The graph of $f$ is a spherical dome of unit radius. (It follows immediately that its volume is half that of a unit sphere, $(4 \pi /3)/2$, whence $c=3/(2 \pi)$.) The marginal densities are given by areas of vertical cross-sections through this sphere. Obviously each cross-section is a semicircle: to obtain the marginal density, find its radius as a function of the remaining variable and use the formula for the area of a circle. Normalizing the resulting univariate function to have unit area turns it into a density.
Finding marginal densities of $f (x,y) = c \sqrt{1 - x^2 - y^2}, x^2 + y^2 \leq 1$
Geometry helps here. The graph of $f$ is a spherical dome of unit radius. (It follows immediately that its volume is half that of a unit sphere, $(4 \pi /3)/2$, whence $c=3/(2 \pi)$.) The marginal
Finding marginal densities of $f (x,y) = c \sqrt{1 - x^2 - y^2}, x^2 + y^2 \leq 1$ Geometry helps here. The graph of $f$ is a spherical dome of unit radius. (It follows immediately that its volume is half that of a unit sphere, $(4 \pi /3)/2$, whence $c=3/(2 \pi)$.) The marginal densities are given by areas of vertical cross-sections through this sphere. Obviously each cross-section is a semicircle: to obtain the marginal density, find its radius as a function of the remaining variable and use the formula for the area of a circle. Normalizing the resulting univariate function to have unit area turns it into a density.
Finding marginal densities of $f (x,y) = c \sqrt{1 - x^2 - y^2}, x^2 + y^2 \leq 1$ Geometry helps here. The graph of $f$ is a spherical dome of unit radius. (It follows immediately that its volume is half that of a unit sphere, $(4 \pi /3)/2$, whence $c=3/(2 \pi)$.) The marginal
30,943
Visualizing multiple "histograms" (bar-charts)
As you have found out there are no easy answers to your question! I presume that you interested in finding strange or different book stores? If this is the case then you could try things like PCA (see the wikipedia cluster analysis page for more details). To give you an idea, consider this example. You have 26 bookshops (with names A, B,..Z). All bookshops are similar, except: Shop Z sells only a few History books. Shops O-Y sell more romance books than average. A principal components plot highlights these shops for further investigation. Here's some sample R code: > d = data.frame(Romance = rpois(26, 50), Horror = rpois(26, 100), Science = rpois(26, 75), History = rpois(26, 125)) > rownames(d) = LETTERS #Alter a few shops > d[15:25,][1] = rpois(11,150) > d[26,][4] = rpois(1, 10) #look at the data > head(d, 2) Romance Horror Science History A 36 107 62 139 B 47 93 64 118 > books.PC.cov = prcomp(d) > books.scores.cov = predict(books.PC.cov) # Plot of PC1 vs PC2 > plot(books.scores.cov[,1],books.scores.cov[,2], xlab="PC 1",ylab="PC 2", pch=NA) > text(books.scores.cov[,1],books.scores.cov[,2],labels=LETTERS) This gives the following plot: PCA plot http://img265.imageshack.us/img265/7263/tmplx.jpg Notice that: Shop z is an outlying point. The others shops form two distinct groups. Other possibilities You could also look at GGobi, I've never used it, but it looks interesting.
Visualizing multiple "histograms" (bar-charts)
As you have found out there are no easy answers to your question! I presume that you interested in finding strange or different book stores? If this is the case then you could try things like PCA (see
Visualizing multiple "histograms" (bar-charts) As you have found out there are no easy answers to your question! I presume that you interested in finding strange or different book stores? If this is the case then you could try things like PCA (see the wikipedia cluster analysis page for more details). To give you an idea, consider this example. You have 26 bookshops (with names A, B,..Z). All bookshops are similar, except: Shop Z sells only a few History books. Shops O-Y sell more romance books than average. A principal components plot highlights these shops for further investigation. Here's some sample R code: > d = data.frame(Romance = rpois(26, 50), Horror = rpois(26, 100), Science = rpois(26, 75), History = rpois(26, 125)) > rownames(d) = LETTERS #Alter a few shops > d[15:25,][1] = rpois(11,150) > d[26,][4] = rpois(1, 10) #look at the data > head(d, 2) Romance Horror Science History A 36 107 62 139 B 47 93 64 118 > books.PC.cov = prcomp(d) > books.scores.cov = predict(books.PC.cov) # Plot of PC1 vs PC2 > plot(books.scores.cov[,1],books.scores.cov[,2], xlab="PC 1",ylab="PC 2", pch=NA) > text(books.scores.cov[,1],books.scores.cov[,2],labels=LETTERS) This gives the following plot: PCA plot http://img265.imageshack.us/img265/7263/tmplx.jpg Notice that: Shop z is an outlying point. The others shops form two distinct groups. Other possibilities You could also look at GGobi, I've never used it, but it looks interesting.
Visualizing multiple "histograms" (bar-charts) As you have found out there are no easy answers to your question! I presume that you interested in finding strange or different book stores? If this is the case then you could try things like PCA (see
30,944
Visualizing multiple "histograms" (bar-charts)
I would suggest something that hasn't got a defined name (probably "parallel plot") and looks like this: Basically you plot all counts for all bookstores as points over categories listed on x axis and connect the results from each bookstore with a line. Still this may be too tangled for 1M lines, though. The concept comes from GGobi which was already mentioned by csgillespie.
Visualizing multiple "histograms" (bar-charts)
I would suggest something that hasn't got a defined name (probably "parallel plot") and looks like this: Basically you plot all counts for all bookstores as points over categories listed on x axis
Visualizing multiple "histograms" (bar-charts) I would suggest something that hasn't got a defined name (probably "parallel plot") and looks like this: Basically you plot all counts for all bookstores as points over categories listed on x axis and connect the results from each bookstore with a line. Still this may be too tangled for 1M lines, though. The concept comes from GGobi which was already mentioned by csgillespie.
Visualizing multiple "histograms" (bar-charts) I would suggest something that hasn't got a defined name (probably "parallel plot") and looks like this: Basically you plot all counts for all bookstores as points over categories listed on x axis
30,945
How to handle count data (categorical data), when it has been converted to a rate?
For me it does not at all sound appropriate to use a chi-square test here. I guess what you wanna do is the following: You have different wards or treatments or whatever else kind of nominal variable (i.e., groups) that divides your data. For each of these groups you collected the Infection Count and the Patient Bed Days to calculate the infection per patient bed days. Know you wanna check for differences between the groups, right? If so, an analysis of variance (ANOVA, in case of more than two groups) or a t-test (in case of two groups) is probably appropriate given by the reasons in Srikant Vadali's post (and if the assumptions homogeneity of variances and comparable groups sizes are also met) and the beginner tag should be added.
How to handle count data (categorical data), when it has been converted to a rate?
For me it does not at all sound appropriate to use a chi-square test here. I guess what you wanna do is the following: You have different wards or treatments or whatever else kind of nominal variable
How to handle count data (categorical data), when it has been converted to a rate? For me it does not at all sound appropriate to use a chi-square test here. I guess what you wanna do is the following: You have different wards or treatments or whatever else kind of nominal variable (i.e., groups) that divides your data. For each of these groups you collected the Infection Count and the Patient Bed Days to calculate the infection per patient bed days. Know you wanna check for differences between the groups, right? If so, an analysis of variance (ANOVA, in case of more than two groups) or a t-test (in case of two groups) is probably appropriate given by the reasons in Srikant Vadali's post (and if the assumptions homogeneity of variances and comparable groups sizes are also met) and the beginner tag should be added.
How to handle count data (categorical data), when it has been converted to a rate? For me it does not at all sound appropriate to use a chi-square test here. I guess what you wanna do is the following: You have different wards or treatments or whatever else kind of nominal variable
30,946
How to handle count data (categorical data), when it has been converted to a rate?
I'm not quite sure what your data look like, or what your precise problem is, but I assume you have a table with the following headings and type: ward (categorical), infections (integer), patient-bed-days (integer or continuous). and you want to tell if the infection rate is statistically different for different wards? One way of doing this is to use a Poisson model: Infections ~ Poisson (Patient bed days * ward infection rate) This can be achieved by using a Poisson glm, with log link function and the log of patient-bed-days in the offset. In R, the code would look something like: glm(infections ~ ward + offset(log(patient-bed-days)), family=poisson())
How to handle count data (categorical data), when it has been converted to a rate?
I'm not quite sure what your data look like, or what your precise problem is, but I assume you have a table with the following headings and type: ward (categorical), infections (integer), patient-bed
How to handle count data (categorical data), when it has been converted to a rate? I'm not quite sure what your data look like, or what your precise problem is, but I assume you have a table with the following headings and type: ward (categorical), infections (integer), patient-bed-days (integer or continuous). and you want to tell if the infection rate is statistically different for different wards? One way of doing this is to use a Poisson model: Infections ~ Poisson (Patient bed days * ward infection rate) This can be achieved by using a Poisson glm, with log link function and the log of patient-bed-days in the offset. In R, the code would look something like: glm(infections ~ ward + offset(log(patient-bed-days)), family=poisson())
How to handle count data (categorical data), when it has been converted to a rate? I'm not quite sure what your data look like, or what your precise problem is, but I assume you have a table with the following headings and type: ward (categorical), infections (integer), patient-bed
30,947
How to handle count data (categorical data), when it has been converted to a rate?
If you were considering conducting Poisson or related regressions on this data (with your outcome variable as a rate), remember to include an offset term for the patient bed days as it technically becomes the "exposure" to your counts. However, in that case, you may also want to consider using just the infection count (not the rate) as your dependent variable, and include the patient bed days as a covariate. I am working on a data set with a similar count vs. rate decision and it seems like converting your dependent variable to a rate leads to a decrease in variability, an increase in skewness and a proportionally larger standard deviation. This makes it more difficult to detect any significant effects. Also watch out if your data is zero-truncated or zero-inflated, and make the appropriate adjustments.
How to handle count data (categorical data), when it has been converted to a rate?
If you were considering conducting Poisson or related regressions on this data (with your outcome variable as a rate), remember to include an offset term for the patient bed days as it technically bec
How to handle count data (categorical data), when it has been converted to a rate? If you were considering conducting Poisson or related regressions on this data (with your outcome variable as a rate), remember to include an offset term for the patient bed days as it technically becomes the "exposure" to your counts. However, in that case, you may also want to consider using just the infection count (not the rate) as your dependent variable, and include the patient bed days as a covariate. I am working on a data set with a similar count vs. rate decision and it seems like converting your dependent variable to a rate leads to a decrease in variability, an increase in skewness and a proportionally larger standard deviation. This makes it more difficult to detect any significant effects. Also watch out if your data is zero-truncated or zero-inflated, and make the appropriate adjustments.
How to handle count data (categorical data), when it has been converted to a rate? If you were considering conducting Poisson or related regressions on this data (with your outcome variable as a rate), remember to include an offset term for the patient bed days as it technically bec
30,948
How to handle count data (categorical data), when it has been converted to a rate?
From a technical purist point of view, you cannot as your ratio "infection per patient bed days" is not a continuous variable. For example, an irrational value will never appear in your dataset. However, you can ignore this technical issue and do whatever tests that may be appropriate for your context. By way of analogy, incomes levels are discrete but almost everyone treats them as continuous. By the way, it is not entirely clear why you want to do a chi-square but I am assuming there is some background context why that makes sense for you.
How to handle count data (categorical data), when it has been converted to a rate?
From a technical purist point of view, you cannot as your ratio "infection per patient bed days" is not a continuous variable. For example, an irrational value will never appear in your dataset. Howev
How to handle count data (categorical data), when it has been converted to a rate? From a technical purist point of view, you cannot as your ratio "infection per patient bed days" is not a continuous variable. For example, an irrational value will never appear in your dataset. However, you can ignore this technical issue and do whatever tests that may be appropriate for your context. By way of analogy, incomes levels are discrete but almost everyone treats them as continuous. By the way, it is not entirely clear why you want to do a chi-square but I am assuming there is some background context why that makes sense for you.
How to handle count data (categorical data), when it has been converted to a rate? From a technical purist point of view, you cannot as your ratio "infection per patient bed days" is not a continuous variable. For example, an irrational value will never appear in your dataset. Howev
30,949
How to handle count data (categorical data), when it has been converted to a rate?
Chi-square tests do not seem appropriate. As others said, provided there are a reasonable number of different rates, you could treat the data as continuous and do regression or ANOVA. You would then want to look at the distribution of the residuals.
How to handle count data (categorical data), when it has been converted to a rate?
Chi-square tests do not seem appropriate. As others said, provided there are a reasonable number of different rates, you could treat the data as continuous and do regression or ANOVA. You would then
How to handle count data (categorical data), when it has been converted to a rate? Chi-square tests do not seem appropriate. As others said, provided there are a reasonable number of different rates, you could treat the data as continuous and do regression or ANOVA. You would then want to look at the distribution of the residuals.
How to handle count data (categorical data), when it has been converted to a rate? Chi-square tests do not seem appropriate. As others said, provided there are a reasonable number of different rates, you could treat the data as continuous and do regression or ANOVA. You would then
30,950
How to handle count data (categorical data), when it has been converted to a rate?
One way of proceeding is to construct various null models each of which assume factors are independent of one another. The independence assumption often makes these easy to construct. Then the predicted joint densities are the products of the marginal densities. To the degree the actual data are consistent with these, you know factors are independent. If they are greater or lesser than the joint prediction, you may be able to infer they co-vary positively or negatively. Be careful to consider numbers of observations in each case, and you may be able to do that formally by treating populations as extended hypergeometrics. This is all in the spirit of the Fisher Exact Test, but Fisher actually formulated it so more general situations could be modeled. See, for example, Discrete Multivariate Analysis: Theory and Practice, by Yvonne M. Bishop, Stephen E. Fienberg, Paul W. Holland, R.J. Light, F. Mosteller, and The Analysis of Cross-Classified Categorical Data, by Stephen E. Fienberg.
How to handle count data (categorical data), when it has been converted to a rate?
One way of proceeding is to construct various null models each of which assume factors are independent of one another. The independence assumption often makes these easy to construct. Then the predict
How to handle count data (categorical data), when it has been converted to a rate? One way of proceeding is to construct various null models each of which assume factors are independent of one another. The independence assumption often makes these easy to construct. Then the predicted joint densities are the products of the marginal densities. To the degree the actual data are consistent with these, you know factors are independent. If they are greater or lesser than the joint prediction, you may be able to infer they co-vary positively or negatively. Be careful to consider numbers of observations in each case, and you may be able to do that formally by treating populations as extended hypergeometrics. This is all in the spirit of the Fisher Exact Test, but Fisher actually formulated it so more general situations could be modeled. See, for example, Discrete Multivariate Analysis: Theory and Practice, by Yvonne M. Bishop, Stephen E. Fienberg, Paul W. Holland, R.J. Light, F. Mosteller, and The Analysis of Cross-Classified Categorical Data, by Stephen E. Fienberg.
How to handle count data (categorical data), when it has been converted to a rate? One way of proceeding is to construct various null models each of which assume factors are independent of one another. The independence assumption often makes these easy to construct. Then the predict
30,951
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every hyperparameter trial?
It'd actually be better to use the same folds while comparing different models, as you've done initially. If you input the pipeline object into the randomCV object, it should use the same folds. But, if you do the other way around, each run will change the folds as you said. Even in that case, you can fix the folds by fixing the cv argument in the pipeline object.
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every
It'd actually be better to use the same folds while comparing different models, as you've done initially. If you input the pipeline object into the randomCV object, it should use the same folds. But,
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every hyperparameter trial? It'd actually be better to use the same folds while comparing different models, as you've done initially. If you input the pipeline object into the randomCV object, it should use the same folds. But, if you do the other way around, each run will change the folds as you said. Even in that case, you can fix the folds by fixing the cv argument in the pipeline object.
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every It'd actually be better to use the same folds while comparing different models, as you've done initially. If you input the pipeline object into the randomCV object, it should use the same folds. But,
30,952
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every hyperparameter trial?
Using the same or different splits amounts to using a different experimental design for your optimization: Using the same splits means that you are setting up the comparisons for your optimization in a paired fashion. Paired tests/comparisons typically have higher statistical power which you may use for your optimization decisions.
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every
Using the same or different splits amounts to using a different experimental design for your optimization: Using the same splits means that you are setting up the comparisons for your optimization in
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every hyperparameter trial? Using the same or different splits amounts to using a different experimental design for your optimization: Using the same splits means that you are setting up the comparisons for your optimization in a paired fashion. Paired tests/comparisons typically have higher statistical power which you may use for your optimization decisions.
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every Using the same or different splits amounts to using a different experimental design for your optimization: Using the same splits means that you are setting up the comparisons for your optimization in
30,953
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every hyperparameter trial?
Keeping same fold for hyperparameter tunning is better idea. If you have random data in each iteration, then you will not be able to understand whether variance in the model is coming due to random data or different hyperparameter. So, to eliminate variance in the model due to randomness in the data, we generally use static folds which can be created before grid or random search starts.
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every
Keeping same fold for hyperparameter tunning is better idea. If you have random data in each iteration, then you will not be able to understand whether variance in the model is coming due to random da
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every hyperparameter trial? Keeping same fold for hyperparameter tunning is better idea. If you have random data in each iteration, then you will not be able to understand whether variance in the model is coming due to random data or different hyperparameter. So, to eliminate variance in the model due to randomness in the data, we generally use static folds which can be created before grid or random search starts.
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every Keeping same fold for hyperparameter tunning is better idea. If you have random data in each iteration, then you will not be able to understand whether variance in the model is coming due to random da
30,954
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every hyperparameter trial?
Let me first rephrase the question to make it a little more precise: "I am wondering if it matters at all if we used the same k fold split for all trials or if it is important that we randomized the split for each trial?" Assume you perform hyperparameter tuning using fixed folds, and random folds. The two tunings will select, in general, different models as the best. The split method matters if those two models have significantly different performance. Conversely, if the difference in performance is negligible, the choice of fixed or random folds does not matter, because they both select equally good models. I'll set aside for the moment on how you decide if the two selected models are different (not trivial, but it's a separate topic). To my knowledge there is virtually no literature published on your question. I have used both methods, and have not noticed difference in performance, but have not explored the question systematically. But if the choice of random vs. fixed folds had a significant effect, there would have been published reports about it. My answer is, therefore, in practical sense it doesn't make a difference which method you use To be sure, cross-validation can produce heavily biased performance estimates for small sample sizes, but neither fixed nor random CV can solve the problem in such datasets. It can be alleviated, to a degree, using repeated CV and nested CV: https://jcheminf.biomedcentral.com/articles/10.1186/1758-2946-6-10
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every
Let me first rephrase the question to make it a little more precise: "I am wondering if it matters at all if we used the same k fold split for all trials or if it is important that we randomized the s
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every hyperparameter trial? Let me first rephrase the question to make it a little more precise: "I am wondering if it matters at all if we used the same k fold split for all trials or if it is important that we randomized the split for each trial?" Assume you perform hyperparameter tuning using fixed folds, and random folds. The two tunings will select, in general, different models as the best. The split method matters if those two models have significantly different performance. Conversely, if the difference in performance is negligible, the choice of fixed or random folds does not matter, because they both select equally good models. I'll set aside for the moment on how you decide if the two selected models are different (not trivial, but it's a separate topic). To my knowledge there is virtually no literature published on your question. I have used both methods, and have not noticed difference in performance, but have not explored the question systematically. But if the choice of random vs. fixed folds had a significant effect, there would have been published reports about it. My answer is, therefore, in practical sense it doesn't make a difference which method you use To be sure, cross-validation can produce heavily biased performance estimates for small sample sizes, but neither fixed nor random CV can solve the problem in such datasets. It can be alleviated, to a degree, using repeated CV and nested CV: https://jcheminf.biomedcentral.com/articles/10.1186/1758-2946-6-10
For hyperparameter tuning with cross validation, is it okay for the fold splits to be same for every Let me first rephrase the question to make it a little more precise: "I am wondering if it matters at all if we used the same k fold split for all trials or if it is important that we randomized the s
30,955
Statistically significant equality of sample sizes (does 50 equal to 53)***?
It does not look like good advice, but not knowing the authors' intention what to say? But, more important, it is not needed. Anova or t-tests with unequal sample sizes is not a problem (it might be inefficient, so in the planning phase try to avoid it.) See for instance Are unequal groups a problem for one-way ANOVA? and many similar posts you can find by searching this site!
Statistically significant equality of sample sizes (does 50 equal to 53)***?
It does not look like good advice, but not knowing the authors' intention what to say? But, more important, it is not needed. Anova or t-tests with unequal sample sizes is not a problem (it might be
Statistically significant equality of sample sizes (does 50 equal to 53)***? It does not look like good advice, but not knowing the authors' intention what to say? But, more important, it is not needed. Anova or t-tests with unequal sample sizes is not a problem (it might be inefficient, so in the planning phase try to avoid it.) See for instance Are unequal groups a problem for one-way ANOVA? and many similar posts you can find by searching this site!
Statistically significant equality of sample sizes (does 50 equal to 53)***? It does not look like good advice, but not knowing the authors' intention what to say? But, more important, it is not needed. Anova or t-tests with unequal sample sizes is not a problem (it might be
30,956
Statistically significant equality of sample sizes (does 50 equal to 53)***?
OK, I got relevant excerpt of the book from OP. It says that ANOVA needs equal smaple sizes across groups and chi-squared test may be used to check this assumption. Of course, ANOVA doesn't need equal samples, which is nicely explained in link provided by kjetil b halvorsen. But even if equal sample sizes were needed, testing this assumption makes no sense. It is because statistical test are trying to use information provided in sample from some population to say something about whole population. T-test, for example, tries to locate population mean given a sample from this population. Population mean is unknown and that is why we need testing procedure to tell something about it with, say, 95% certaininty. Notice now, that sample mean is very different from population mean. Sample mean is perfectly known so we just know where it is (with 100% certainity). So we know for sure if it differs from, say, 153.32. Problem is that sample mean is not very interesting quantity, it's population mean that is interesting. If equal sample sizes were needed we would be, contrary to common use of statistical tests, interested in some quantity (number of elements in this case) in sample, not in population. So we do not need any testing procedure. We can just count observations in our sample and can be 100% sure if it differs from what authors of the book call "ANOVA assumptions".
Statistically significant equality of sample sizes (does 50 equal to 53)***?
OK, I got relevant excerpt of the book from OP. It says that ANOVA needs equal smaple sizes across groups and chi-squared test may be used to check this assumption. Of course, ANOVA doesn't need equal
Statistically significant equality of sample sizes (does 50 equal to 53)***? OK, I got relevant excerpt of the book from OP. It says that ANOVA needs equal smaple sizes across groups and chi-squared test may be used to check this assumption. Of course, ANOVA doesn't need equal samples, which is nicely explained in link provided by kjetil b halvorsen. But even if equal sample sizes were needed, testing this assumption makes no sense. It is because statistical test are trying to use information provided in sample from some population to say something about whole population. T-test, for example, tries to locate population mean given a sample from this population. Population mean is unknown and that is why we need testing procedure to tell something about it with, say, 95% certaininty. Notice now, that sample mean is very different from population mean. Sample mean is perfectly known so we just know where it is (with 100% certainity). So we know for sure if it differs from, say, 153.32. Problem is that sample mean is not very interesting quantity, it's population mean that is interesting. If equal sample sizes were needed we would be, contrary to common use of statistical tests, interested in some quantity (number of elements in this case) in sample, not in population. So we do not need any testing procedure. We can just count observations in our sample and can be 100% sure if it differs from what authors of the book call "ANOVA assumptions".
Statistically significant equality of sample sizes (does 50 equal to 53)***? OK, I got relevant excerpt of the book from OP. It says that ANOVA needs equal smaple sizes across groups and chi-squared test may be used to check this assumption. Of course, ANOVA doesn't need equal
30,957
Training a neural network on chess data
I think you need to consider running it on a GPU. Google Colab is free and Amazon AWS is very cheap. You seem to know what you are doing so you can probably get up and running with PyTorch very quickly. Once you compare the performance of the same network implemented on GPU vs your single processor setup, you will be in a better to position to know where to go next.
Training a neural network on chess data
I think you need to consider running it on a GPU. Google Colab is free and Amazon AWS is very cheap. You seem to know what you are doing so you can probably get up and running with PyTorch very quickl
Training a neural network on chess data I think you need to consider running it on a GPU. Google Colab is free and Amazon AWS is very cheap. You seem to know what you are doing so you can probably get up and running with PyTorch very quickly. Once you compare the performance of the same network implemented on GPU vs your single processor setup, you will be in a better to position to know where to go next.
Training a neural network on chess data I think you need to consider running it on a GPU. Google Colab is free and Amazon AWS is very cheap. You seem to know what you are doing so you can probably get up and running with PyTorch very quickl
30,958
Training a neural network on chess data
You could also try the CPU-friendly NNUE alternative. It is currently been developed for chess by the Stockfish team and seems to give good results. It is easy to use and train the networks, and it should be much easier than the hard-way. I've been working on the Stockfish team, and I think I could also help you with your engine if you wish (I'm also working on my own chess engine). Regards and good luck!
Training a neural network on chess data
You could also try the CPU-friendly NNUE alternative. It is currently been developed for chess by the Stockfish team and seems to give good results. It is easy to use and train the networks, and it sh
Training a neural network on chess data You could also try the CPU-friendly NNUE alternative. It is currently been developed for chess by the Stockfish team and seems to give good results. It is easy to use and train the networks, and it should be much easier than the hard-way. I've been working on the Stockfish team, and I think I could also help you with your engine if you wish (I'm also working on my own chess engine). Regards and good luck!
Training a neural network on chess data You could also try the CPU-friendly NNUE alternative. It is currently been developed for chess by the Stockfish team and seems to give good results. It is easy to use and train the networks, and it sh
30,959
Poisson regression appropriate?
Poisson regression does not appear to be appropriate in your case. First off, Poisson regression models counts, and your events are binary, so if at all, logistic regression would be more appropriate. (Poisson regression can be used to model rare binary events, but I would assume you have so many 1s in your data that the Poisson regression would also expect a number of 2s and a few 3s, and their absence will make for a worse model than a logistic regression.) Also, dichotomizing data is bad practice, per many, many threads here and elsewhere. If your threshold is at a weight gain of 3 pounds, then you will treat two subjects with gains of 3 and of 20 pounds as exactly the same (both have an outcome of 1), also a subject with a gain of 2 pounds and one with a loss of 10 pounds (both are 0) - needless to say, this very much (and artificially) throws away a lot of data. I would much rather recommend an ANOVA style analysis, which can deal with continuous outcome variables. In your case, since you are dealing with repeated measurements (you should model the fact that a subject's weight measurements are correlated), a repeated measures ANOVA (also known as a "mixed model") would be appropriate. You can even specify that two measurements taken two months apart will be more highly correlated than two measurements taken four months apart (e.g., using a corCAR error correlation in R, and in similar ways in SAS). Repeated measures ANOVA can deal with predictors and interactions (then it's more commonly called "ANCOVA"). It can deal with different numbers of measurements on the different subjects. If you insist on dichotomizing your data, you can even run a repeated measurements logistic regression.
Poisson regression appropriate?
Poisson regression does not appear to be appropriate in your case. First off, Poisson regression models counts, and your events are binary, so if at all, logistic regression would be more appropriate.
Poisson regression appropriate? Poisson regression does not appear to be appropriate in your case. First off, Poisson regression models counts, and your events are binary, so if at all, logistic regression would be more appropriate. (Poisson regression can be used to model rare binary events, but I would assume you have so many 1s in your data that the Poisson regression would also expect a number of 2s and a few 3s, and their absence will make for a worse model than a logistic regression.) Also, dichotomizing data is bad practice, per many, many threads here and elsewhere. If your threshold is at a weight gain of 3 pounds, then you will treat two subjects with gains of 3 and of 20 pounds as exactly the same (both have an outcome of 1), also a subject with a gain of 2 pounds and one with a loss of 10 pounds (both are 0) - needless to say, this very much (and artificially) throws away a lot of data. I would much rather recommend an ANOVA style analysis, which can deal with continuous outcome variables. In your case, since you are dealing with repeated measurements (you should model the fact that a subject's weight measurements are correlated), a repeated measures ANOVA (also known as a "mixed model") would be appropriate. You can even specify that two measurements taken two months apart will be more highly correlated than two measurements taken four months apart (e.g., using a corCAR error correlation in R, and in similar ways in SAS). Repeated measures ANOVA can deal with predictors and interactions (then it's more commonly called "ANCOVA"). It can deal with different numbers of measurements on the different subjects. If you insist on dichotomizing your data, you can even run a repeated measurements logistic regression.
Poisson regression appropriate? Poisson regression does not appear to be appropriate in your case. First off, Poisson regression models counts, and your events are binary, so if at all, logistic regression would be more appropriate.
30,960
Why so much difference in SE areas in these graphs
This is straight up expected behavior for LOESS/LOWESS (and other scatterplot smoothers/nonparametric regression methods). LOESS (LOcally Estimated Scatterplot Smoother) more or less estimates the value of y using only some fraction of the x observations for a small stretch of x values, it repeats that estimation by shifting that 'small stretch' until all observed values of x have been covered. The result is: Not assuming a linear relationship between y and x, and (importantly for your question) Less confidence about the line of estimates. A few additional points This greater uncertainty about the line of estimates does not mean that nonparametric regression must have lower power than the corresponding linear regression: that is only true if the relationship between y and x is approximately linear (examine the size of the individual residuals from the best fitting straight line through a scattering of y data nonlinearly related to x to get a sense of why). LOESS and LOWESS, along with GAMs and other nonparametric regression models all rely on the 'small stretch' of x values mentioned above. This can be expressed as 'bandwidth' or 'span' (which describe the proportion of the observed total range of x values to be included in each estimation), or 'k nearest neighbors' (an absolute number of observed points on the x axis to include). When trying to decide whether to use a linear or nonparametric regression model I start with the latter, and ask whether a straight line will fit within the confidence band of the nonparametric regression; if yes, then I proceed to use linear regression, if no, I am done, unless I need parametric estimates for some reason (e.g., statistical inference, communication of model results, model transport to a different data set) in which case I proceed to use nonlinear least squares for a reasonable functional form as informed by the shape of the nonparametric model. NB: I am leaving a lot out about various parametric curve-fitting approaches here.
Why so much difference in SE areas in these graphs
This is straight up expected behavior for LOESS/LOWESS (and other scatterplot smoothers/nonparametric regression methods). LOESS (LOcally Estimated Scatterplot Smoother) more or less estimates the val
Why so much difference in SE areas in these graphs This is straight up expected behavior for LOESS/LOWESS (and other scatterplot smoothers/nonparametric regression methods). LOESS (LOcally Estimated Scatterplot Smoother) more or less estimates the value of y using only some fraction of the x observations for a small stretch of x values, it repeats that estimation by shifting that 'small stretch' until all observed values of x have been covered. The result is: Not assuming a linear relationship between y and x, and (importantly for your question) Less confidence about the line of estimates. A few additional points This greater uncertainty about the line of estimates does not mean that nonparametric regression must have lower power than the corresponding linear regression: that is only true if the relationship between y and x is approximately linear (examine the size of the individual residuals from the best fitting straight line through a scattering of y data nonlinearly related to x to get a sense of why). LOESS and LOWESS, along with GAMs and other nonparametric regression models all rely on the 'small stretch' of x values mentioned above. This can be expressed as 'bandwidth' or 'span' (which describe the proportion of the observed total range of x values to be included in each estimation), or 'k nearest neighbors' (an absolute number of observed points on the x axis to include). When trying to decide whether to use a linear or nonparametric regression model I start with the latter, and ask whether a straight line will fit within the confidence band of the nonparametric regression; if yes, then I proceed to use linear regression, if no, I am done, unless I need parametric estimates for some reason (e.g., statistical inference, communication of model results, model transport to a different data set) in which case I proceed to use nonlinear least squares for a reasonable functional form as informed by the shape of the nonparametric model. NB: I am leaving a lot out about various parametric curve-fitting approaches here.
Why so much difference in SE areas in these graphs This is straight up expected behavior for LOESS/LOWESS (and other scatterplot smoothers/nonparametric regression methods). LOESS (LOcally Estimated Scatterplot Smoother) more or less estimates the val
30,961
Why so much difference in SE areas in these graphs
I think the answer is that your two graphs measure two completely different Standard Errors and related Confidence Intervals. The first graph represents a Standard Error around the mean observation representing the actual straight regression line. By definition, this set of Confidence Intervals are going to be very narrow around such regression line. As you can observe these Confidence Intervals include just a very small fraction of the data points, instead of the customary 95% of such data points when the Confidence Intervals use + or - 1.96 Standard Errors. The second graph has what looks like more traditional much wider Standard Errors and Confidence Intervals that capture 95% or more of all data points within your model. I think this second set of Confidence Intervals are sometimes called Prediction Intervals. The two graphs are not wrong. They are both correct. They just represent something completely different that people confuse all the time.
Why so much difference in SE areas in these graphs
I think the answer is that your two graphs measure two completely different Standard Errors and related Confidence Intervals. The first graph represents a Standard Error around the mean observation
Why so much difference in SE areas in these graphs I think the answer is that your two graphs measure two completely different Standard Errors and related Confidence Intervals. The first graph represents a Standard Error around the mean observation representing the actual straight regression line. By definition, this set of Confidence Intervals are going to be very narrow around such regression line. As you can observe these Confidence Intervals include just a very small fraction of the data points, instead of the customary 95% of such data points when the Confidence Intervals use + or - 1.96 Standard Errors. The second graph has what looks like more traditional much wider Standard Errors and Confidence Intervals that capture 95% or more of all data points within your model. I think this second set of Confidence Intervals are sometimes called Prediction Intervals. The two graphs are not wrong. They are both correct. They just represent something completely different that people confuse all the time.
Why so much difference in SE areas in these graphs I think the answer is that your two graphs measure two completely different Standard Errors and related Confidence Intervals. The first graph represents a Standard Error around the mean observation
30,962
Is there a standard measure of fit to validate Exploratory factor analysis?
Both the KMO and Bartlett’s test of sphericity are commonly used to verify the feasibility of the data for Exploratory Factor Analysis (EFA). Kaiser-Meyer Olkin (KMO) model tests sampling adequacy by measuring the proportion of variance in the items that may be common variance. Values ranging between .80 and 1.00 indicate sampling adequacy (Cerny & Kaiser, 1977). Bartlett’s test of sphericity examines whether a correlation matrix is significantly different to the identity matrix, in which diagonal elements are unities and all off-diagonal elements are zeros (Bartlett, 1950). Significant results indicate that variables in the correlation matrix are suitable for factor analysis. The remaining four measures of fit can be used in EFA (see Aichholzer (2014) for an example) but in my experience, these fit measures are more commonly applied as part of the Confirmatory Factor Analysis and Structural Equation Modelling, in which you test whether your proposed model conforms to its expected factor structure, just like in the second paper you referenced). This pdf by Hooper (2008): Structural Equation Modelling: Guidelines for Determining Model Fit provides a concise and straight to the point summary of each fit statistic you listed and more. As of 2019, this is indeed quite a cited article with over 7,000 citations. Before providing a concise summary of the aforementioned fit statistics, it is worth noting that there are different classifications of fit indices, but one popular classification distinguishes between absolute fit indices and comparative fit indices. Classification of fit indices: Absolute and Comparative The logic behind absolute fit indices is essentially to test how well the model specified by the researcher reproduces the observed data. Commonly used absolute fit statistics include the $\chi^2$ fit statistic, RMSEA, SRMR. In contrast, comparative fit indices are based on a different logic, i.e. they assess how well a model specified by a researcher fits the observed sample data relative to a null model (i.e., a model that is based on the assumption that all observed variables are not correlated) (Miles & Shevlin, 2007). Popular comparative model fit indices are the CFI and TLI. The $\chi^2$ fit statistic The $\chi^2$ measures the discrepancy between the observed and the implied covariance matrices. The $\chi^2$ fit statistic is very popular and frequently reported in both CFA and SEM studies. However, it is notoriously sensitive to large sample sizes and increased model complexity (i.e. models with a large number of indicators and degrees of freedom). Therefore, the current practice is to report it mostly for historical reasons, and it rarely used to make decisions about the adequacy of model fit. The RMSEA The Root Mean Square Error of Approximation (RMSEA) provides information as to how well the model, with unknown but optimally chosen parameter estimates, would fit the population covariance matrix (Byrne, 1998). It is a very commonly used fit statistic. One of its key advantages is that the RMSEA calculates confidence intervals around its value. Values below $.060$ indicate close fit (Hu & Bentler, 1999). Values up to $.080$ are commonly accepted as adequate. The SRMR The Standardized Root Mean Residual (SRMR) is the square root of the difference between the residuals of the sample covariance matrix and the hypothesized covariance model. As SRMR is standardized, its values range between $0$ and $1$. Commonly, models with values below $.05$ threshold are considered to indicate good fit (Byrne, 1998). Also, values up to $.08$ are acceptable (Hu & Bentler, 1999). The CFI and TLI Two comparative fit indices commonly reported are the Comparative Fit Index (CFI) and the Tucker Lewis Index (TLI). The indices are similar; however, note that the CFI is normed while the TLI is not. Therefore, the CFI’s values range between zero and one, whereas the TLI’s values may fall below zero or be above one (Hair et al., 2013). For CFI and TLI values above .95 are indicative of good fit (Hu & Bentler, 1999). In practice, CFI and TLI values from $.90$ to $.95$ are considered acceptable. Note that the TLI is non-normed, so its values can go above $1.00$ EDIT: Further to the aforementioned information, Hoyle (2012) provides an excellent succinct summary of numerous fit indices. This table includes, for example, information on the indices' theoretical range, sensitivity to varying sample size and model complexity. Note that, in contrast to the indices introduced above, a great number of other indices exist, as illustrated in Hoyle's table. Yet, the frequency of their use is decreasing for various reasons. For example, RMR is non-normed and thus it is hard to interpret. Here these indices are shown below simply for everyone's general awareness, i.e. the fact that they exist, who developed them and what their statistical properties are. References Aichholzer, J. (2014). Random intercept EFA of personality scales. Journal of Research in Personality, 53, 1-4. Bartlett, M. S. (1950). Tests of significance in factor analysis. British Journal of Statistical Psychology, 3(2), 77-85. Byrne, B.M. (1998). Structural Equation Modeling with LISREL, PRELIS and SIMPLIS: Basic Concepts, Applications and Programming. Mahwah, NJ: Lawrence Erlbaum Associates. Cerny, B. A., & Kaiser, H. F. (1977). A study of a measure of sampling adequacy for factor-analytic correlation matrices. Multivariate Behavioural Research, 12(1), 43–47. Hair, R. D., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2013). Multivariate data analysis. Englewood Cliffs, NJ: Prentice–Hall. Hooper, D., Coughlan, J., & Mullen, M. R. (2008). Structural equation modeling: Guidelines for determining model fit. Electronic Journal of Business Research Methods, 6(1), 53-60. Hoyle, R. H. (2012). Handbook of structural equation modeling. London: Guilford Press. Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modelling, 6(1), 1–55. Miles, J. & Shevlin, M. (2007). A time and a place for incremental fit indices. Personality and Individual Differences, 42(5), 869-74.
Is there a standard measure of fit to validate Exploratory factor analysis?
Both the KMO and Bartlett’s test of sphericity are commonly used to verify the feasibility of the data for Exploratory Factor Analysis (EFA). Kaiser-Meyer Olkin (KMO) model tests sampling adequacy
Is there a standard measure of fit to validate Exploratory factor analysis? Both the KMO and Bartlett’s test of sphericity are commonly used to verify the feasibility of the data for Exploratory Factor Analysis (EFA). Kaiser-Meyer Olkin (KMO) model tests sampling adequacy by measuring the proportion of variance in the items that may be common variance. Values ranging between .80 and 1.00 indicate sampling adequacy (Cerny & Kaiser, 1977). Bartlett’s test of sphericity examines whether a correlation matrix is significantly different to the identity matrix, in which diagonal elements are unities and all off-diagonal elements are zeros (Bartlett, 1950). Significant results indicate that variables in the correlation matrix are suitable for factor analysis. The remaining four measures of fit can be used in EFA (see Aichholzer (2014) for an example) but in my experience, these fit measures are more commonly applied as part of the Confirmatory Factor Analysis and Structural Equation Modelling, in which you test whether your proposed model conforms to its expected factor structure, just like in the second paper you referenced). This pdf by Hooper (2008): Structural Equation Modelling: Guidelines for Determining Model Fit provides a concise and straight to the point summary of each fit statistic you listed and more. As of 2019, this is indeed quite a cited article with over 7,000 citations. Before providing a concise summary of the aforementioned fit statistics, it is worth noting that there are different classifications of fit indices, but one popular classification distinguishes between absolute fit indices and comparative fit indices. Classification of fit indices: Absolute and Comparative The logic behind absolute fit indices is essentially to test how well the model specified by the researcher reproduces the observed data. Commonly used absolute fit statistics include the $\chi^2$ fit statistic, RMSEA, SRMR. In contrast, comparative fit indices are based on a different logic, i.e. they assess how well a model specified by a researcher fits the observed sample data relative to a null model (i.e., a model that is based on the assumption that all observed variables are not correlated) (Miles & Shevlin, 2007). Popular comparative model fit indices are the CFI and TLI. The $\chi^2$ fit statistic The $\chi^2$ measures the discrepancy between the observed and the implied covariance matrices. The $\chi^2$ fit statistic is very popular and frequently reported in both CFA and SEM studies. However, it is notoriously sensitive to large sample sizes and increased model complexity (i.e. models with a large number of indicators and degrees of freedom). Therefore, the current practice is to report it mostly for historical reasons, and it rarely used to make decisions about the adequacy of model fit. The RMSEA The Root Mean Square Error of Approximation (RMSEA) provides information as to how well the model, with unknown but optimally chosen parameter estimates, would fit the population covariance matrix (Byrne, 1998). It is a very commonly used fit statistic. One of its key advantages is that the RMSEA calculates confidence intervals around its value. Values below $.060$ indicate close fit (Hu & Bentler, 1999). Values up to $.080$ are commonly accepted as adequate. The SRMR The Standardized Root Mean Residual (SRMR) is the square root of the difference between the residuals of the sample covariance matrix and the hypothesized covariance model. As SRMR is standardized, its values range between $0$ and $1$. Commonly, models with values below $.05$ threshold are considered to indicate good fit (Byrne, 1998). Also, values up to $.08$ are acceptable (Hu & Bentler, 1999). The CFI and TLI Two comparative fit indices commonly reported are the Comparative Fit Index (CFI) and the Tucker Lewis Index (TLI). The indices are similar; however, note that the CFI is normed while the TLI is not. Therefore, the CFI’s values range between zero and one, whereas the TLI’s values may fall below zero or be above one (Hair et al., 2013). For CFI and TLI values above .95 are indicative of good fit (Hu & Bentler, 1999). In practice, CFI and TLI values from $.90$ to $.95$ are considered acceptable. Note that the TLI is non-normed, so its values can go above $1.00$ EDIT: Further to the aforementioned information, Hoyle (2012) provides an excellent succinct summary of numerous fit indices. This table includes, for example, information on the indices' theoretical range, sensitivity to varying sample size and model complexity. Note that, in contrast to the indices introduced above, a great number of other indices exist, as illustrated in Hoyle's table. Yet, the frequency of their use is decreasing for various reasons. For example, RMR is non-normed and thus it is hard to interpret. Here these indices are shown below simply for everyone's general awareness, i.e. the fact that they exist, who developed them and what their statistical properties are. References Aichholzer, J. (2014). Random intercept EFA of personality scales. Journal of Research in Personality, 53, 1-4. Bartlett, M. S. (1950). Tests of significance in factor analysis. British Journal of Statistical Psychology, 3(2), 77-85. Byrne, B.M. (1998). Structural Equation Modeling with LISREL, PRELIS and SIMPLIS: Basic Concepts, Applications and Programming. Mahwah, NJ: Lawrence Erlbaum Associates. Cerny, B. A., & Kaiser, H. F. (1977). A study of a measure of sampling adequacy for factor-analytic correlation matrices. Multivariate Behavioural Research, 12(1), 43–47. Hair, R. D., Black, W. C., Babin, B. J., Anderson, R. E., & Tatham, R. L. (2013). Multivariate data analysis. Englewood Cliffs, NJ: Prentice–Hall. Hooper, D., Coughlan, J., & Mullen, M. R. (2008). Structural equation modeling: Guidelines for determining model fit. Electronic Journal of Business Research Methods, 6(1), 53-60. Hoyle, R. H. (2012). Handbook of structural equation modeling. London: Guilford Press. Hu, L. T., & Bentler, P. M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modelling, 6(1), 1–55. Miles, J. & Shevlin, M. (2007). A time and a place for incremental fit indices. Personality and Individual Differences, 42(5), 869-74.
Is there a standard measure of fit to validate Exploratory factor analysis? Both the KMO and Bartlett’s test of sphericity are commonly used to verify the feasibility of the data for Exploratory Factor Analysis (EFA). Kaiser-Meyer Olkin (KMO) model tests sampling adequacy
30,963
Can deep learning determine if two samples of handwriting are by the same person?
This paper seems to do exactly what you want: recognize authorship of handwriting samples, even when the texts don't match. "DeepWriter: A Multi-Stream Deep CNN for Text-independent Writer Identification" Linjie Xing, Yu Qiao. 2016. Text-independent writer identification is challenging due to the huge variation of written contents and the ambiguous written styles of different writers. This paper proposes DeepWriter, a deep multi-stream CNN to learn deep powerful representation for recognizing writers. DeepWriter takes local handwritten patches as input and is trained with softmax classification loss. The main contributions are: 1) we design and optimize multi-stream structure for writer identification task; 2) we introduce data augmentation learning to enhance the performance of DeepWriter; 3) we introduce a patch scanning strategy to handle text image with different lengths. In addition, we find that different languages such as English and Chinese may share common features for writer identification, and joint training can yield better performance. Experimental results on IAM and HWDB datasets show that our models achieve high identification accuracy: 99.01% on 301 writers and 97.03% on 657 writers with one English sentence input, 93.85% on 300 writers with one Chinese character input, which outperform previous methods with a large margin. Moreover, our models obtain accuracy of 98.01% on 301 writers with only 4 English alphabets as input. Siamese networks are used to compare things like signatures; it seems reasonable to try and extend this method to handwriting analysis. One challenge would be that whereas signatures are kind of like "stamps" in the sense that the writer will want to reproduce the same symbol over and over, two handwriting samples might not be writing the same words and phrases. So the success or failure of the project hinges on whether the neural network can recognize the writing style as distinct from the words. Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Sickinger and Roopak Shah. "Signature Verification using a 'Siamese' Time Delay Neural Network." AT&T Bell Labs. 1994 This paper describes an algorithm for verification of signatures written on a pen-input tablet. The algorithm is based on a novel, artificial neural network, called a "Siamese" neural network. This network consists of two identical sub-networks joined at their outputs. During training the two sub-networks extract features from two signatures, while the joining neuron measures the distance between the two feature vectors. Verification consists of comparing an extracted feature vector with a stored feature vector for the signer. Signatures closer to this stored representation than a chosen threshold are accepted, all other signatures are rejected as forgeries. Another approach is to use the triplet-loss and embedding strategies such as that used in FaceNet. Then you compare the embeddings by some means to decide if two images have the same or a different author. The success on faces taken from different angles and different lightning conditions is promising, and perhaps a better fit for matching handwriting samples. Florian Schroff, Dmitry Kalenichenko, James Philbin. "FaceNet: A Unified Embedding for Face Recognition and Clustering" Despite significant recent advances in the field of face recognition, implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors. Our method uses a deep convolutional network trained to directly optimize the embedding itself, rather than an intermediate bottleneck layer as in previous deep learning approaches. To train, we use triplets of roughly aligned matching / non-matching face patches generated using a novel online triplet mining method. The benefit of our approach is much greater representational efficiency: we achieve state-of-the-art face recognition performance using only 128-bytes per face. On the widely used Labeled Faces in the Wild (LFW) dataset, our system achieves a new record accuracy of 99.63%. On YouTube Faces DB it achieves 95.12%. Our system cuts the error rate in comparison to the best published result by 30% on both datasets. We also introduce the concept of harmonic embeddings, and a harmonic triplet loss, which describe different versions of face embeddings (produced by different networks) that are compatible to each other and allow for direct comparison between each other.
Can deep learning determine if two samples of handwriting are by the same person?
This paper seems to do exactly what you want: recognize authorship of handwriting samples, even when the texts don't match. "DeepWriter: A Multi-Stream Deep CNN for Text-independent Writer Identificat
Can deep learning determine if two samples of handwriting are by the same person? This paper seems to do exactly what you want: recognize authorship of handwriting samples, even when the texts don't match. "DeepWriter: A Multi-Stream Deep CNN for Text-independent Writer Identification" Linjie Xing, Yu Qiao. 2016. Text-independent writer identification is challenging due to the huge variation of written contents and the ambiguous written styles of different writers. This paper proposes DeepWriter, a deep multi-stream CNN to learn deep powerful representation for recognizing writers. DeepWriter takes local handwritten patches as input and is trained with softmax classification loss. The main contributions are: 1) we design and optimize multi-stream structure for writer identification task; 2) we introduce data augmentation learning to enhance the performance of DeepWriter; 3) we introduce a patch scanning strategy to handle text image with different lengths. In addition, we find that different languages such as English and Chinese may share common features for writer identification, and joint training can yield better performance. Experimental results on IAM and HWDB datasets show that our models achieve high identification accuracy: 99.01% on 301 writers and 97.03% on 657 writers with one English sentence input, 93.85% on 300 writers with one Chinese character input, which outperform previous methods with a large margin. Moreover, our models obtain accuracy of 98.01% on 301 writers with only 4 English alphabets as input. Siamese networks are used to compare things like signatures; it seems reasonable to try and extend this method to handwriting analysis. One challenge would be that whereas signatures are kind of like "stamps" in the sense that the writer will want to reproduce the same symbol over and over, two handwriting samples might not be writing the same words and phrases. So the success or failure of the project hinges on whether the neural network can recognize the writing style as distinct from the words. Jane Bromley, Isabelle Guyon, Yann LeCun, Eduard Sickinger and Roopak Shah. "Signature Verification using a 'Siamese' Time Delay Neural Network." AT&T Bell Labs. 1994 This paper describes an algorithm for verification of signatures written on a pen-input tablet. The algorithm is based on a novel, artificial neural network, called a "Siamese" neural network. This network consists of two identical sub-networks joined at their outputs. During training the two sub-networks extract features from two signatures, while the joining neuron measures the distance between the two feature vectors. Verification consists of comparing an extracted feature vector with a stored feature vector for the signer. Signatures closer to this stored representation than a chosen threshold are accepted, all other signatures are rejected as forgeries. Another approach is to use the triplet-loss and embedding strategies such as that used in FaceNet. Then you compare the embeddings by some means to decide if two images have the same or a different author. The success on faces taken from different angles and different lightning conditions is promising, and perhaps a better fit for matching handwriting samples. Florian Schroff, Dmitry Kalenichenko, James Philbin. "FaceNet: A Unified Embedding for Face Recognition and Clustering" Despite significant recent advances in the field of face recognition, implementing face verification and recognition efficiently at scale presents serious challenges to current approaches. In this paper we present a system, called FaceNet, that directly learns a mapping from face images to a compact Euclidean space where distances directly correspond to a measure of face similarity. Once this space has been produced, tasks such as face recognition, verification and clustering can be easily implemented using standard techniques with FaceNet embeddings as feature vectors. Our method uses a deep convolutional network trained to directly optimize the embedding itself, rather than an intermediate bottleneck layer as in previous deep learning approaches. To train, we use triplets of roughly aligned matching / non-matching face patches generated using a novel online triplet mining method. The benefit of our approach is much greater representational efficiency: we achieve state-of-the-art face recognition performance using only 128-bytes per face. On the widely used Labeled Faces in the Wild (LFW) dataset, our system achieves a new record accuracy of 99.63%. On YouTube Faces DB it achieves 95.12%. Our system cuts the error rate in comparison to the best published result by 30% on both datasets. We also introduce the concept of harmonic embeddings, and a harmonic triplet loss, which describe different versions of face embeddings (produced by different networks) that are compatible to each other and allow for direct comparison between each other.
Can deep learning determine if two samples of handwriting are by the same person? This paper seems to do exactly what you want: recognize authorship of handwriting samples, even when the texts don't match. "DeepWriter: A Multi-Stream Deep CNN for Text-independent Writer Identificat
30,964
Intuition for nonmonotonicity of coefficient paths in ridge regression
My reasoning is somewhat similar to Cagdas', but I'd like to look at how the things develop as $\lambda$ goes into extremes. If I did my algebra right, the derivative of the coefficients is given by: $$\frac{\partial \hat{\beta}}{\partial\lambda} = -(X'X + \lambda I)^{-1}\hat{\beta}$$ Now, for $\lambda \rightarrow 0$, ridge regression approaches ordinary linear regression. The penalty term becomes negligible and you can approximate the derivative by: $$\frac{\partial \hat{\beta}}{\partial\lambda} \approx -(X'X)^{-1}\hat{\beta}$$ That is, the gradient of $\hat{\beta}$ is mostly determined by the distribution of your data. The direction of change of $\hat{\beta}$ with increasing $\lambda$ can be positive or negative for any of the coefficients, depending both on the coefficients and on your data. On the other hand, when $\lambda \rightarrow \infty$, $\lambda I \gg X'X$ and you can approximate the derivative by: $$\frac{\partial \hat{\beta}}{\partial\lambda} \approx -(\lambda I)^{-1}\hat{\beta}$$ Here, the gradient is determined almost entirely by the value of $\hat{\beta}$, and ridge regression tries to force it to a null vector. The path of $\hat{\beta}$ for large and increasing $\lambda$'s is almost a straight line towards the origin. Edit: To illustrate this behaviour, here a simple data set with correlated variables: set.seed(0) tb1 = tibble( x1 = seq(-1, 1, by=.01), x2 = -2*x1 + rnorm(length(x1)), y = -3*x1+x2 + rnorm(length(x1)) ) The path of the coefficients $\beta$ with an increasing $\lambda$ looks like this:
Intuition for nonmonotonicity of coefficient paths in ridge regression
My reasoning is somewhat similar to Cagdas', but I'd like to look at how the things develop as $\lambda$ goes into extremes. If I did my algebra right, the derivative of the coefficients is given by:
Intuition for nonmonotonicity of coefficient paths in ridge regression My reasoning is somewhat similar to Cagdas', but I'd like to look at how the things develop as $\lambda$ goes into extremes. If I did my algebra right, the derivative of the coefficients is given by: $$\frac{\partial \hat{\beta}}{\partial\lambda} = -(X'X + \lambda I)^{-1}\hat{\beta}$$ Now, for $\lambda \rightarrow 0$, ridge regression approaches ordinary linear regression. The penalty term becomes negligible and you can approximate the derivative by: $$\frac{\partial \hat{\beta}}{\partial\lambda} \approx -(X'X)^{-1}\hat{\beta}$$ That is, the gradient of $\hat{\beta}$ is mostly determined by the distribution of your data. The direction of change of $\hat{\beta}$ with increasing $\lambda$ can be positive or negative for any of the coefficients, depending both on the coefficients and on your data. On the other hand, when $\lambda \rightarrow \infty$, $\lambda I \gg X'X$ and you can approximate the derivative by: $$\frac{\partial \hat{\beta}}{\partial\lambda} \approx -(\lambda I)^{-1}\hat{\beta}$$ Here, the gradient is determined almost entirely by the value of $\hat{\beta}$, and ridge regression tries to force it to a null vector. The path of $\hat{\beta}$ for large and increasing $\lambda$'s is almost a straight line towards the origin. Edit: To illustrate this behaviour, here a simple data set with correlated variables: set.seed(0) tb1 = tibble( x1 = seq(-1, 1, by=.01), x2 = -2*x1 + rnorm(length(x1)), y = -3*x1+x2 + rnorm(length(x1)) ) The path of the coefficients $\beta$ with an increasing $\lambda$ looks like this:
Intuition for nonmonotonicity of coefficient paths in ridge regression My reasoning is somewhat similar to Cagdas', but I'd like to look at how the things develop as $\lambda$ goes into extremes. If I did my algebra right, the derivative of the coefficients is given by:
30,965
Intuition for nonmonotonicity of coefficient paths in ridge regression
Geometrical point of view The ridge path is not a straight line. See for instance an image from this question The limit of "unit-variance" ridge regression estimator when $\lambda\to\infty$ Note that the path crosses the points where the circles $\Vert \beta \Vert = constant$ and ellipses $\Vert y - X\beta \Vert = constant$ are touching. It is this elliptical shape that allows the path to pass 0 and be decreasing/increasing in various parts. An question with an image that shows this more dramatically is Why under joint least squares direction is it possible for some coefficients to decrease in LARS regression? Plot of lasso path as coordinates of $\beta$ This plot is for Lasso instead of ridge regression, but as shown in the first image the principle is the same. Typically parameters decrease when we increase the penalty, but due to correlation it might be better that parameters decrease while simultaneously increasing another. This happens in the image with parameter $b_1$. Increasing $b_1$ makes that decreasing $b_2$ and $b_3$ coincides with less increase of the squared error part of the loss function (the green surface). Shrinking the parameters in a straight line from the OLS solution to 0, means that the sum of squared error becomes high. Taking a detour allows to shrink the parameters with less reduction of the sum of least squared error.
Intuition for nonmonotonicity of coefficient paths in ridge regression
Geometrical point of view The ridge path is not a straight line. See for instance an image from this question The limit of "unit-variance" ridge regression estimator when $\lambda\to\infty$ Note that
Intuition for nonmonotonicity of coefficient paths in ridge regression Geometrical point of view The ridge path is not a straight line. See for instance an image from this question The limit of "unit-variance" ridge regression estimator when $\lambda\to\infty$ Note that the path crosses the points where the circles $\Vert \beta \Vert = constant$ and ellipses $\Vert y - X\beta \Vert = constant$ are touching. It is this elliptical shape that allows the path to pass 0 and be decreasing/increasing in various parts. An question with an image that shows this more dramatically is Why under joint least squares direction is it possible for some coefficients to decrease in LARS regression? Plot of lasso path as coordinates of $\beta$ This plot is for Lasso instead of ridge regression, but as shown in the first image the principle is the same. Typically parameters decrease when we increase the penalty, but due to correlation it might be better that parameters decrease while simultaneously increasing another. This happens in the image with parameter $b_1$. Increasing $b_1$ makes that decreasing $b_2$ and $b_3$ coincides with less increase of the squared error part of the loss function (the green surface). Shrinking the parameters in a straight line from the OLS solution to 0, means that the sum of squared error becomes high. Taking a detour allows to shrink the parameters with less reduction of the sum of least squared error.
Intuition for nonmonotonicity of coefficient paths in ridge regression Geometrical point of view The ridge path is not a straight line. See for instance an image from this question The limit of "unit-variance" ridge regression estimator when $\lambda\to\infty$ Note that
30,966
Intuition for nonmonotonicity of coefficient paths in ridge regression
I think the key intuition to think of here is a situation where you are interested in modelling a single outcome of interest, $Y$, with three possible independent variables, $W$, $X$, $Z$. Imagine in a standard regression, $W$ and $X$ are found significant whereas the coefficient on $Z$ is found insignificant and close to zero, although $Z$ and $Y$ are correlated one-on-one. Let's assume the reason that $Z$ is not significant in the full regression is because all the effect of $Z$ is subsumed in $W$ and $X$ (i.e. they jointly form a better predictor). Now in a ridge regression context, for some parameter values of $\lambda$ it might be the case that using $Z$ is preferred over having both $W$ and $X$ due to the inclusion of a parameter error term. This example is also illustrative why ridge regression is used less within an inferential context, because even when controlling for potential confounders, they might still be 'the last coefficient standing'. See below an example of an outcome $Y$ which is solely related to two variables, $Z$ and $X$. The variable $W$ is affected by both $Z$ and $X$ but does not affect $Y$. Therefore, the SSE (which is minimised in the regression contex) is much lower in a regression of the form $y_i = \beta_0 + \beta_x X + \beta_z Z + \beta_w W + u_i$ relative to one with only $W$: $y_i = \beta_0 + \beta_w W + u_i$. This is illustrated in the R-squared of the former (at 0.97) relative to the latter (at 0.35). However, the L1-norm is strictly higher in the former case than in the latter (namely $L_1 = \sqrt{\beta_0^2 + \beta_1^2 + \beta_2^2}$). As you drive up $\lambda$, the model will prioritise minimising the $L_1$-norm over the SSE. At some point, the model will prefer having just a single non-zero $\beta$ driving the $L_1$-norm, even at the cost of a much poorer model fit. Therefore, you can see that the coefficients on $\beta_x$ and $\beta_z$ start to decrease up to the point that only $\beta_w$ is non-zero. Also note that further increasing $\lambda$ makes the $L_1$-norm important enough to completely lose interest in minimising any SSE (and therefore generating any model fit) and drives all coefficients to zero. library(glmnet) x <- rnorm(100, 10, 2) z <- rnorm(100, 5, 2) w <- 0.8*x + 0.8*z + rnorm(100, 0, 1) u <- rnorm(100, 0, 1) y <- 2*x + 2*z + u # R-squared of 0.97 summary(lm(y ~ 1 + x + z + w)) # R-squared of 0.35 summary(lm(y ~ 1 + z)) fit <- glmnet(as.matrix(cbind(x, z, w)), as.matrix(y)) plot(fit)
Intuition for nonmonotonicity of coefficient paths in ridge regression
I think the key intuition to think of here is a situation where you are interested in modelling a single outcome of interest, $Y$, with three possible independent variables, $W$, $X$, $Z$. Imagine in
Intuition for nonmonotonicity of coefficient paths in ridge regression I think the key intuition to think of here is a situation where you are interested in modelling a single outcome of interest, $Y$, with three possible independent variables, $W$, $X$, $Z$. Imagine in a standard regression, $W$ and $X$ are found significant whereas the coefficient on $Z$ is found insignificant and close to zero, although $Z$ and $Y$ are correlated one-on-one. Let's assume the reason that $Z$ is not significant in the full regression is because all the effect of $Z$ is subsumed in $W$ and $X$ (i.e. they jointly form a better predictor). Now in a ridge regression context, for some parameter values of $\lambda$ it might be the case that using $Z$ is preferred over having both $W$ and $X$ due to the inclusion of a parameter error term. This example is also illustrative why ridge regression is used less within an inferential context, because even when controlling for potential confounders, they might still be 'the last coefficient standing'. See below an example of an outcome $Y$ which is solely related to two variables, $Z$ and $X$. The variable $W$ is affected by both $Z$ and $X$ but does not affect $Y$. Therefore, the SSE (which is minimised in the regression contex) is much lower in a regression of the form $y_i = \beta_0 + \beta_x X + \beta_z Z + \beta_w W + u_i$ relative to one with only $W$: $y_i = \beta_0 + \beta_w W + u_i$. This is illustrated in the R-squared of the former (at 0.97) relative to the latter (at 0.35). However, the L1-norm is strictly higher in the former case than in the latter (namely $L_1 = \sqrt{\beta_0^2 + \beta_1^2 + \beta_2^2}$). As you drive up $\lambda$, the model will prioritise minimising the $L_1$-norm over the SSE. At some point, the model will prefer having just a single non-zero $\beta$ driving the $L_1$-norm, even at the cost of a much poorer model fit. Therefore, you can see that the coefficients on $\beta_x$ and $\beta_z$ start to decrease up to the point that only $\beta_w$ is non-zero. Also note that further increasing $\lambda$ makes the $L_1$-norm important enough to completely lose interest in minimising any SSE (and therefore generating any model fit) and drives all coefficients to zero. library(glmnet) x <- rnorm(100, 10, 2) z <- rnorm(100, 5, 2) w <- 0.8*x + 0.8*z + rnorm(100, 0, 1) u <- rnorm(100, 0, 1) y <- 2*x + 2*z + u # R-squared of 0.97 summary(lm(y ~ 1 + x + z + w)) # R-squared of 0.35 summary(lm(y ~ 1 + z)) fit <- glmnet(as.matrix(cbind(x, z, w)), as.matrix(y)) plot(fit)
Intuition for nonmonotonicity of coefficient paths in ridge regression I think the key intuition to think of here is a situation where you are interested in modelling a single outcome of interest, $Y$, with three possible independent variables, $W$, $X$, $Z$. Imagine in
30,967
Intuition for nonmonotonicity of coefficient paths in ridge regression
Ridge solution: $$\hat{\beta_\lambda} = (X'X + \lambda I)^{-1}X'y$$ If my matrix algebra is right, derivative of Ridge with respect to $\lambda$: $$\frac{\partial \hat{\beta_\lambda}}{\partial\lambda} = -(X'X + \lambda I)^{-2}X'y$$ which is: $$\frac{\partial \hat{\beta_\lambda}}{\partial\lambda} = -(X'X + \lambda I)^{-1}\hat{\beta_\lambda} = -A_\lambda \hat{\beta_\lambda}$$ The derivative of each component of $\hat{\beta_\lambda}$ at each $\lambda$ depends on the values of other components of $\hat{\beta_\lambda}$. At this point it would be unreasonable to think that the derivative of a particular component of $\hat{\beta_\lambda}$ will not have a zero crossing when other components are changing. Suppose that $X$s are not co-linear and are whitened before regression. In that case $\frac{X'X}{n}$ is $I$ where $n$ is the number of data points. Hence: $$\frac{\partial \hat{\beta_\lambda}}{\partial\lambda} = -(n + \lambda)^{-1}\hat{\beta_\lambda} = -\frac{\hat{\beta_\lambda}}{n+\lambda} = 0$$ which will be $0$ when a component of $\hat{\beta_\lambda}$ becomes $0$. In this case we should have a monotonic scenario. Basically if we start with no correlation between regressors they will go to $0$ with increasing $\lambda$ without changing direction starting from their cross-covariance with $y$ as their initial values.
Intuition for nonmonotonicity of coefficient paths in ridge regression
Ridge solution: $$\hat{\beta_\lambda} = (X'X + \lambda I)^{-1}X'y$$ If my matrix algebra is right, derivative of Ridge with respect to $\lambda$: $$\frac{\partial \hat{\beta_\lambda}}{\partial\lambda}
Intuition for nonmonotonicity of coefficient paths in ridge regression Ridge solution: $$\hat{\beta_\lambda} = (X'X + \lambda I)^{-1}X'y$$ If my matrix algebra is right, derivative of Ridge with respect to $\lambda$: $$\frac{\partial \hat{\beta_\lambda}}{\partial\lambda} = -(X'X + \lambda I)^{-2}X'y$$ which is: $$\frac{\partial \hat{\beta_\lambda}}{\partial\lambda} = -(X'X + \lambda I)^{-1}\hat{\beta_\lambda} = -A_\lambda \hat{\beta_\lambda}$$ The derivative of each component of $\hat{\beta_\lambda}$ at each $\lambda$ depends on the values of other components of $\hat{\beta_\lambda}$. At this point it would be unreasonable to think that the derivative of a particular component of $\hat{\beta_\lambda}$ will not have a zero crossing when other components are changing. Suppose that $X$s are not co-linear and are whitened before regression. In that case $\frac{X'X}{n}$ is $I$ where $n$ is the number of data points. Hence: $$\frac{\partial \hat{\beta_\lambda}}{\partial\lambda} = -(n + \lambda)^{-1}\hat{\beta_\lambda} = -\frac{\hat{\beta_\lambda}}{n+\lambda} = 0$$ which will be $0$ when a component of $\hat{\beta_\lambda}$ becomes $0$. In this case we should have a monotonic scenario. Basically if we start with no correlation between regressors they will go to $0$ with increasing $\lambda$ without changing direction starting from their cross-covariance with $y$ as their initial values.
Intuition for nonmonotonicity of coefficient paths in ridge regression Ridge solution: $$\hat{\beta_\lambda} = (X'X + \lambda I)^{-1}X'y$$ If my matrix algebra is right, derivative of Ridge with respect to $\lambda$: $$\frac{\partial \hat{\beta_\lambda}}{\partial\lambda}
30,968
Understanding the Cullen and Frey plot
This plot used to be commonly called a Pearson plot (it also had several other names), though sometimes with skewness rather than its square being plotted. It was used long before Cullen and Frey wrote about it (a fact they clearly acknowledge in their text, though their own mention of having seen it in a book written in the late 60s still considerably underestimates its age). The aim of such a plot was to help identify a suitable Pearson distribution. The Cullen and Frey version of the plot doesn't show all the Pearson family on the plot; you can't see from that plot whether the skewness and kurtosis would correspond to that of a Pearson IV or VI distribution because they leave the dividing line off the plot (which corresponds to a shifted and scaled inverse Gamma) By transforming (squeezing and rotating) the plot to fit the one here it turns out that it's in the region of a Pearson IV, but you can see from the histogram that the skewness and kurtosis not a sufficient way to summarize the distribution -- no Pearson IV distribution is shaped like that; nor are a couple of other candidates that would correspond to that approximate region. Another thing to note is that the sample kurtosis tends to underestimate the population kurtosis, and that selection by matching third and fourth cumulants isn't usually an especially good way to go about choosing a model. Indeed, it is likely that no simple, commonly used distribution will fit very well. You might get an adequate fit with a mixture (as you suggest); I'd expect at least 4-5 components from some suitable family might be required. However, there are few applications where it's really necessary to identify a distributional form like this -- you'd be much better to explain what you'd be using such a distribution for, because there's probably something better that you can do than this.
Understanding the Cullen and Frey plot
This plot used to be commonly called a Pearson plot (it also had several other names), though sometimes with skewness rather than its square being plotted. It was used long before Cullen and Frey wrot
Understanding the Cullen and Frey plot This plot used to be commonly called a Pearson plot (it also had several other names), though sometimes with skewness rather than its square being plotted. It was used long before Cullen and Frey wrote about it (a fact they clearly acknowledge in their text, though their own mention of having seen it in a book written in the late 60s still considerably underestimates its age). The aim of such a plot was to help identify a suitable Pearson distribution. The Cullen and Frey version of the plot doesn't show all the Pearson family on the plot; you can't see from that plot whether the skewness and kurtosis would correspond to that of a Pearson IV or VI distribution because they leave the dividing line off the plot (which corresponds to a shifted and scaled inverse Gamma) By transforming (squeezing and rotating) the plot to fit the one here it turns out that it's in the region of a Pearson IV, but you can see from the histogram that the skewness and kurtosis not a sufficient way to summarize the distribution -- no Pearson IV distribution is shaped like that; nor are a couple of other candidates that would correspond to that approximate region. Another thing to note is that the sample kurtosis tends to underestimate the population kurtosis, and that selection by matching third and fourth cumulants isn't usually an especially good way to go about choosing a model. Indeed, it is likely that no simple, commonly used distribution will fit very well. You might get an adequate fit with a mixture (as you suggest); I'd expect at least 4-5 components from some suitable family might be required. However, there are few applications where it's really necessary to identify a distributional form like this -- you'd be much better to explain what you'd be using such a distribution for, because there's probably something better that you can do than this.
Understanding the Cullen and Frey plot This plot used to be commonly called a Pearson plot (it also had several other names), though sometimes with skewness rather than its square being plotted. It was used long before Cullen and Frey wrot
30,969
Understanding the Cullen and Frey plot
Your data looks like a mixture to me: there seems to be one component with a mean fragment length of 100 nt, one around 200 nt and another with a mean fragment length around 300 nt (you can see 'bumps' in the histogram). Is there anything about how the library was prepared that would explain why there is more than one component in the mixture? I would fit a mixture of 3 Gaussians to the data. I use the R package mixtools. Edit: to check goodness of fit, you might try this function: https://rdrr.io/cran/AdaptGauss/man/Chi2testMixtures.html
Understanding the Cullen and Frey plot
Your data looks like a mixture to me: there seems to be one component with a mean fragment length of 100 nt, one around 200 nt and another with a mean fragment length around 300 nt (you can see 'bumps
Understanding the Cullen and Frey plot Your data looks like a mixture to me: there seems to be one component with a mean fragment length of 100 nt, one around 200 nt and another with a mean fragment length around 300 nt (you can see 'bumps' in the histogram). Is there anything about how the library was prepared that would explain why there is more than one component in the mixture? I would fit a mixture of 3 Gaussians to the data. I use the R package mixtools. Edit: to check goodness of fit, you might try this function: https://rdrr.io/cran/AdaptGauss/man/Chi2testMixtures.html
Understanding the Cullen and Frey plot Your data looks like a mixture to me: there seems to be one component with a mean fragment length of 100 nt, one around 200 nt and another with a mean fragment length around 300 nt (you can see 'bumps
30,970
Linear regression minimising MAD in sklearn
The expected MAD is minimized by the median of the distribution (Hanley, 2001, The American Statistician; see also Why does minimizing the MAE lead to forecasting the median and not the mean?). Therefore, you are looking for a model that will yield the conditional median, instead of the conditional mean. This is a special case of quantile-regression, specifically for the 50% quantile. Roger Koenker is the main guru for quantile regression; see in particular his book Quantile Regression. There are ways to do quantile regression in Python. This tutorial may be helpful. If you are open to using R, you can use the quantreg package.
Linear regression minimising MAD in sklearn
The expected MAD is minimized by the median of the distribution (Hanley, 2001, The American Statistician; see also Why does minimizing the MAE lead to forecasting the median and not the mean?). Theref
Linear regression minimising MAD in sklearn The expected MAD is minimized by the median of the distribution (Hanley, 2001, The American Statistician; see also Why does minimizing the MAE lead to forecasting the median and not the mean?). Therefore, you are looking for a model that will yield the conditional median, instead of the conditional mean. This is a special case of quantile-regression, specifically for the 50% quantile. Roger Koenker is the main guru for quantile regression; see in particular his book Quantile Regression. There are ways to do quantile regression in Python. This tutorial may be helpful. If you are open to using R, you can use the quantreg package.
Linear regression minimising MAD in sklearn The expected MAD is minimized by the median of the distribution (Hanley, 2001, The American Statistician; see also Why does minimizing the MAE lead to forecasting the median and not the mean?). Theref
30,971
What is the purpose of "transformed variables" in Stan?
Objects declared in the transformed parameters block of a Stan program are: Unknown but are known given the values of the objects in the parameters block Saved in the output and hence should be of interest to the researcher Are usually the arguments to the log-likelihood function that is evaluated in the model block, although in hierarchical models the line between the prior and the likelihood can be drawn in multiple ways (if the third point is not the case, the object should usually be declared in the generated quantities block of a Stan program) The purpose of declaring such things in the transformed parameters block rather than the parameters block is often to obtain more efficient sampling from the posterior distribution. If there is a posterior PDF $f\left(\left.\boldsymbol{\theta}\right|\mbox{data}\right)$, then for any bijective transformation from $\boldsymbol{\alpha}$ to $\boldsymbol{\theta}$, the posterior PDF of $\boldsymbol{\alpha}$ is simply $f\left(\left.\boldsymbol{\theta}\left(\boldsymbol{\alpha}\right)\right|\mbox{data}\right)\mathrm{abs}\left|\mathbf{J}\right|$, where $\left|\mathbf{J}\right|$ is the determinant of the Jacobian matrix of the transformation from $\boldsymbol{\alpha}$ to $\boldsymbol{\theta}$. Thus, you can make the same inferences about (functions of) $\boldsymbol{\theta}$ either by drawing from the posterior whose PDF is $f\left(\left.\boldsymbol{\theta}\right|\mbox{data}\right)$ where $\boldsymbol{\theta}$ are the parameters or the posterior whose PDF is $f\left(\left.\boldsymbol{\theta}\left(\boldsymbol{\alpha}\right)\right|\mbox{data}\right)\mathrm{abs}\left|\mathbf{J}\right|$ where $\boldsymbol{\alpha}$ are parameters and $\boldsymbol{\theta}$ are transformed parameters. Since the posterior inferences about (functions of) $\boldsymbol{\theta}$ are the same, you are free to choose a transformation that enhances the efficiency of the sampling by making $\boldsymbol{\alpha}$ less correlated, unit scaled, more Gaussian, etc. than is $\boldsymbol{\theta}$.
What is the purpose of "transformed variables" in Stan?
Objects declared in the transformed parameters block of a Stan program are: Unknown but are known given the values of the objects in the parameters block Saved in the output and hence should be of in
What is the purpose of "transformed variables" in Stan? Objects declared in the transformed parameters block of a Stan program are: Unknown but are known given the values of the objects in the parameters block Saved in the output and hence should be of interest to the researcher Are usually the arguments to the log-likelihood function that is evaluated in the model block, although in hierarchical models the line between the prior and the likelihood can be drawn in multiple ways (if the third point is not the case, the object should usually be declared in the generated quantities block of a Stan program) The purpose of declaring such things in the transformed parameters block rather than the parameters block is often to obtain more efficient sampling from the posterior distribution. If there is a posterior PDF $f\left(\left.\boldsymbol{\theta}\right|\mbox{data}\right)$, then for any bijective transformation from $\boldsymbol{\alpha}$ to $\boldsymbol{\theta}$, the posterior PDF of $\boldsymbol{\alpha}$ is simply $f\left(\left.\boldsymbol{\theta}\left(\boldsymbol{\alpha}\right)\right|\mbox{data}\right)\mathrm{abs}\left|\mathbf{J}\right|$, where $\left|\mathbf{J}\right|$ is the determinant of the Jacobian matrix of the transformation from $\boldsymbol{\alpha}$ to $\boldsymbol{\theta}$. Thus, you can make the same inferences about (functions of) $\boldsymbol{\theta}$ either by drawing from the posterior whose PDF is $f\left(\left.\boldsymbol{\theta}\right|\mbox{data}\right)$ where $\boldsymbol{\theta}$ are the parameters or the posterior whose PDF is $f\left(\left.\boldsymbol{\theta}\left(\boldsymbol{\alpha}\right)\right|\mbox{data}\right)\mathrm{abs}\left|\mathbf{J}\right|$ where $\boldsymbol{\alpha}$ are parameters and $\boldsymbol{\theta}$ are transformed parameters. Since the posterior inferences about (functions of) $\boldsymbol{\theta}$ are the same, you are free to choose a transformation that enhances the efficiency of the sampling by making $\boldsymbol{\alpha}$ less correlated, unit scaled, more Gaussian, etc. than is $\boldsymbol{\theta}$.
What is the purpose of "transformed variables" in Stan? Objects declared in the transformed parameters block of a Stan program are: Unknown but are known given the values of the objects in the parameters block Saved in the output and hence should be of in
30,972
Formal definition of the qqline used in a Q-Q plot
Sort of "both" - the line depends both on the observed quantiles (which define the y-axis of the QQ plot) and the expected/theoretical/reference quantiles (which the define the x-axis). The documentation (which you quote) should always be taken as the canonical reference: ‘qqline’ adds a line to a “theoretical”, by default normal, quantile-quantile plot which passes through the ‘probs’ quantiles, by default the first and third quartiles. If in doubt, USTL ("Use the Source, Luke") , which can be found here: here's a slightly abridged and commented version ## quantiles (.25 and 0.75 by default) of data y <- quantile(y, probs, names=FALSE, type=qtype, na.rm = TRUE) ## quantiles of reference/theoretical distribution x <- distribution(probs) ## ... slope <- diff(y)/diff(x) ## observed slope between quantiles int <- y[1L]-slope*x[1L] ## intercept abline(int, slope, ...) ## draw the line For what it's worth, I believe that this approach (line connecting central quantiles) is used because it fulfills the following criteria for exploratory/diagnostic approaches: quick (e.g. no need to run a linear regression, just find the quantiles and draw a straight line) robust (it only depends on the behavior of the central part of the distribution, won't be thrown off by weird tails)
Formal definition of the qqline used in a Q-Q plot
Sort of "both" - the line depends both on the observed quantiles (which define the y-axis of the QQ plot) and the expected/theoretical/reference quantiles (which the define the x-axis). The documentat
Formal definition of the qqline used in a Q-Q plot Sort of "both" - the line depends both on the observed quantiles (which define the y-axis of the QQ plot) and the expected/theoretical/reference quantiles (which the define the x-axis). The documentation (which you quote) should always be taken as the canonical reference: ‘qqline’ adds a line to a “theoretical”, by default normal, quantile-quantile plot which passes through the ‘probs’ quantiles, by default the first and third quartiles. If in doubt, USTL ("Use the Source, Luke") , which can be found here: here's a slightly abridged and commented version ## quantiles (.25 and 0.75 by default) of data y <- quantile(y, probs, names=FALSE, type=qtype, na.rm = TRUE) ## quantiles of reference/theoretical distribution x <- distribution(probs) ## ... slope <- diff(y)/diff(x) ## observed slope between quantiles int <- y[1L]-slope*x[1L] ## intercept abline(int, slope, ...) ## draw the line For what it's worth, I believe that this approach (line connecting central quantiles) is used because it fulfills the following criteria for exploratory/diagnostic approaches: quick (e.g. no need to run a linear regression, just find the quantiles and draw a straight line) robust (it only depends on the behavior of the central part of the distribution, won't be thrown off by weird tails)
Formal definition of the qqline used in a Q-Q plot Sort of "both" - the line depends both on the observed quantiles (which define the y-axis of the QQ plot) and the expected/theoretical/reference quantiles (which the define the x-axis). The documentat
30,973
Formal definition of the qqline used in a Q-Q plot
I think it simply adds a line segment between the points (x1, y1) & (x2, y2) for given probabilities (p1, p2) (x1, x2) are the quantiles of the theoretical distribution; (y1, y2) for the data comparison. Function qline has simple code under the hood. This is a simple e.g. in R # sample data set.seed(2) y <- rt(100, df = 5) # get the values probs <- c(0.25, 0.75) x1 <- qnorm(probs[1]) x2 <- qnorm(probs[2]) y1 <- quantile(y, probs[1]) y2 <- quantile(y, probs[2]) # plot qqnorm(y) segments(x1, y1, x2, y2, col = "red", lwd = 2) qqline(y, lty = 2) # theoretical match is straight line. If you add more samples, qqline should # converge to this abline(0,1)
Formal definition of the qqline used in a Q-Q plot
I think it simply adds a line segment between the points (x1, y1) & (x2, y2) for given probabilities (p1, p2) (x1, x2) are the quantiles of the theoretical distribution; (y1, y2) for the data comparis
Formal definition of the qqline used in a Q-Q plot I think it simply adds a line segment between the points (x1, y1) & (x2, y2) for given probabilities (p1, p2) (x1, x2) are the quantiles of the theoretical distribution; (y1, y2) for the data comparison. Function qline has simple code under the hood. This is a simple e.g. in R # sample data set.seed(2) y <- rt(100, df = 5) # get the values probs <- c(0.25, 0.75) x1 <- qnorm(probs[1]) x2 <- qnorm(probs[2]) y1 <- quantile(y, probs[1]) y2 <- quantile(y, probs[2]) # plot qqnorm(y) segments(x1, y1, x2, y2, col = "red", lwd = 2) qqline(y, lty = 2) # theoretical match is straight line. If you add more samples, qqline should # converge to this abline(0,1)
Formal definition of the qqline used in a Q-Q plot I think it simply adds a line segment between the points (x1, y1) & (x2, y2) for given probabilities (p1, p2) (x1, x2) are the quantiles of the theoretical distribution; (y1, y2) for the data comparis
30,974
Why is the Hazard function not a pdf?
The argument of a conditional pdf cannot depend on the conditioning event in any way, shape or form. In $$f_{T\mid A}(t\mid A) = \lim_{\delta\to 0} \frac{P\{t < T \leq t+\delta\mid A\}}{\delta},$$ $A$ can be a fixed event such as $\{T>5\}$ but not something that depends on $t$ such as $\{T > t\}$. Another important reason why a hazard function $h(t)$ (or any scalar submultiple thereof) cannot possibly be a pdf is that $$\int_0^\infty h(t)\, \mathrm dt = \infty$$ whereas pdfs of lifetimes have more mundane values for their integrals: $$\int_0^\infty f_T(t)\, \mathrm dt = 1.$$
Why is the Hazard function not a pdf?
The argument of a conditional pdf cannot depend on the conditioning event in any way, shape or form. In $$f_{T\mid A}(t\mid A) = \lim_{\delta\to 0} \frac{P\{t < T \leq t+\delta\mid A\}}{\delta},$$ $
Why is the Hazard function not a pdf? The argument of a conditional pdf cannot depend on the conditioning event in any way, shape or form. In $$f_{T\mid A}(t\mid A) = \lim_{\delta\to 0} \frac{P\{t < T \leq t+\delta\mid A\}}{\delta},$$ $A$ can be a fixed event such as $\{T>5\}$ but not something that depends on $t$ such as $\{T > t\}$. Another important reason why a hazard function $h(t)$ (or any scalar submultiple thereof) cannot possibly be a pdf is that $$\int_0^\infty h(t)\, \mathrm dt = \infty$$ whereas pdfs of lifetimes have more mundane values for their integrals: $$\int_0^\infty f_T(t)\, \mathrm dt = 1.$$
Why is the Hazard function not a pdf? The argument of a conditional pdf cannot depend on the conditioning event in any way, shape or form. In $$f_{T\mid A}(t\mid A) = \lim_{\delta\to 0} \frac{P\{t < T \leq t+\delta\mid A\}}{\delta},$$ $
30,975
Why is the Hazard function not a pdf?
I think the counterexample Björn suggested is a enough to answer the question. Let me write it out in more detail: Björn's Counterexample Let $T\stackrel{d}{=}\mathrm{Exp}(\lambda)$ for any $\lambda > 0$ then $f_T(t) = \lambda e^{-\lambda t}$ which implies $S(t) = \mathcal{P}(T>t) = 1-(1-e^{-\lambda t}) = e^{-\lambda t}$. The hazard function can be calculated as $\lambda(t) = \dfrac{f_T(t)}{S(t)} = \dfrac{\lambda e^{-\lambda t}}{e^{-\lambda t}} = \lambda$. (hence the notation $\lambda(t)$ for a hazard function). Clearly this implies that $\lambda(t)$ is not a probability density function, since $\int_0^{+\infty} \lambda(u) \mathrm{d}u \not = 1$. Back to your question(s) The first part of your question was: `Why is one a pdf and the other not' As far as I remember, $f_T(t)$ is defined as a pdf if certain conditions hold. One of these conditions is $\int_\Omega f_T(t) \mathrm{d} t =1$. (Which can clearly be violated in the case of $\lambda(t)$). The density function $f_T(t)$ can be viewed as the limit of a cumulative distribution function in a narrow timeslot, which is what the notation $f_T(t) = \lim_{\delta \to 0} \frac{\mathcal{P}(t\leqslant T<t+\delta)}{\delta}$ suggests. I would not call this the definition since it lacks the conditions. I guess is the reason this is not a pdf because the conditioning is not on a single event $T=t$ but rather on $T\geqslant t$? If it were on a single event $T=t$, would this be a pdf? Notice how: $$ \mathcal{P}(t\leqslant T<t+\delta \mid T = t) = 1 $$ which implies that conditioning on $T=t$ isn't that usefull. A conditional probability function $f_{Y\mid X}(y\mid X=x)$ for continious r.v. is defined as $$ f_{Y\mid X}(y\mid X=x) = \dfrac{f_{Y,X}(Y=y,X=x)}{f_X(x)} $$ and the hazard function does not fit here. You need (at least) two different random variables. You could look at something like this: $$ f_{T \mid Z}(t\mid Z = 1) $$ for a certain $Z$ as an indicator of treatment or control. Then cleary this is a probability function.
Why is the Hazard function not a pdf?
I think the counterexample Björn suggested is a enough to answer the question. Let me write it out in more detail: Björn's Counterexample Let $T\stackrel{d}{=}\mathrm{Exp}(\lambda)$ for any $\lambda >
Why is the Hazard function not a pdf? I think the counterexample Björn suggested is a enough to answer the question. Let me write it out in more detail: Björn's Counterexample Let $T\stackrel{d}{=}\mathrm{Exp}(\lambda)$ for any $\lambda > 0$ then $f_T(t) = \lambda e^{-\lambda t}$ which implies $S(t) = \mathcal{P}(T>t) = 1-(1-e^{-\lambda t}) = e^{-\lambda t}$. The hazard function can be calculated as $\lambda(t) = \dfrac{f_T(t)}{S(t)} = \dfrac{\lambda e^{-\lambda t}}{e^{-\lambda t}} = \lambda$. (hence the notation $\lambda(t)$ for a hazard function). Clearly this implies that $\lambda(t)$ is not a probability density function, since $\int_0^{+\infty} \lambda(u) \mathrm{d}u \not = 1$. Back to your question(s) The first part of your question was: `Why is one a pdf and the other not' As far as I remember, $f_T(t)$ is defined as a pdf if certain conditions hold. One of these conditions is $\int_\Omega f_T(t) \mathrm{d} t =1$. (Which can clearly be violated in the case of $\lambda(t)$). The density function $f_T(t)$ can be viewed as the limit of a cumulative distribution function in a narrow timeslot, which is what the notation $f_T(t) = \lim_{\delta \to 0} \frac{\mathcal{P}(t\leqslant T<t+\delta)}{\delta}$ suggests. I would not call this the definition since it lacks the conditions. I guess is the reason this is not a pdf because the conditioning is not on a single event $T=t$ but rather on $T\geqslant t$? If it were on a single event $T=t$, would this be a pdf? Notice how: $$ \mathcal{P}(t\leqslant T<t+\delta \mid T = t) = 1 $$ which implies that conditioning on $T=t$ isn't that usefull. A conditional probability function $f_{Y\mid X}(y\mid X=x)$ for continious r.v. is defined as $$ f_{Y\mid X}(y\mid X=x) = \dfrac{f_{Y,X}(Y=y,X=x)}{f_X(x)} $$ and the hazard function does not fit here. You need (at least) two different random variables. You could look at something like this: $$ f_{T \mid Z}(t\mid Z = 1) $$ for a certain $Z$ as an indicator of treatment or control. Then cleary this is a probability function.
Why is the Hazard function not a pdf? I think the counterexample Björn suggested is a enough to answer the question. Let me write it out in more detail: Björn's Counterexample Let $T\stackrel{d}{=}\mathrm{Exp}(\lambda)$ for any $\lambda >
30,976
Why is the Hazard function not a pdf?
The Hazard function can be equivalently seen as the ratio of the pdf $f(t)$ to the survival function $S(t)$: $$\lambda(t)=\lim_{\delta \to 0} \frac{P(t\leqslant T < t+\delta \mid T\geqslant t)}{\delta}$$ by definition of conditional probability: $$=\lim_{\delta \to 0} \frac{P(t\leqslant T < t+\delta \cap T\geqslant t) / P(T \ge t)}{\delta}$$ $$=\lim_{\delta \to 0} \frac{P(t\leqslant T < t+\delta) / P(T \ge t)}{\delta}$$ by definition of $f(t):$ $$= \frac{f(t)}{P(T \ge t)}$$ by definition of survival function: $$ =\frac{f(t)}{S(t)}$$ Thus since the hazard function is the pdf scaled by the survival function, it will not generally integrate to $1$, and hence is not a pdf.
Why is the Hazard function not a pdf?
The Hazard function can be equivalently seen as the ratio of the pdf $f(t)$ to the survival function $S(t)$: $$\lambda(t)=\lim_{\delta \to 0} \frac{P(t\leqslant T < t+\delta \mid T\geqslant t)}{\delta
Why is the Hazard function not a pdf? The Hazard function can be equivalently seen as the ratio of the pdf $f(t)$ to the survival function $S(t)$: $$\lambda(t)=\lim_{\delta \to 0} \frac{P(t\leqslant T < t+\delta \mid T\geqslant t)}{\delta}$$ by definition of conditional probability: $$=\lim_{\delta \to 0} \frac{P(t\leqslant T < t+\delta \cap T\geqslant t) / P(T \ge t)}{\delta}$$ $$=\lim_{\delta \to 0} \frac{P(t\leqslant T < t+\delta) / P(T \ge t)}{\delta}$$ by definition of $f(t):$ $$= \frac{f(t)}{P(T \ge t)}$$ by definition of survival function: $$ =\frac{f(t)}{S(t)}$$ Thus since the hazard function is the pdf scaled by the survival function, it will not generally integrate to $1$, and hence is not a pdf.
Why is the Hazard function not a pdf? The Hazard function can be equivalently seen as the ratio of the pdf $f(t)$ to the survival function $S(t)$: $$\lambda(t)=\lim_{\delta \to 0} \frac{P(t\leqslant T < t+\delta \mid T\geqslant t)}{\delta
30,977
Results of lm() function with a dependent ordered categorical variable?
In this case, what lm() is doing is converting your "categorical" variable into a numeric sequence in order. To make this clearer, I'll adapt Bolker's code a bit to make the X variable more obviously categorical: set.seed(101) d <- data.frame(x=sample(1:4,size=30,replace=TRUE)) d$y <- rnorm(30,1+2*d$x,sd=0.01) d$x = factor(d$x, labels=c("none", "some", "more", "a lot")) coef(lm(y~x, d)) # (Intercept) xsome xmore xa lot # 3.001627 1.991260 3.995619 5.999098 So here, the mean of x=None is in the intercept, and the deviation from that is indicated for each category. coef(lm(y~ordered(x), d)) # (Intercept) ordered(x).L ordered(x).Q ordered(x).C # 5.998121421 4.472505514 0.006109021 -0.003125958 Conceptually, what's happened here is that the ordered() function converted x into newx using (something similar to): if (x=="None") newx=-.67 if (x=="some") newx=-.22 if (x=="more") newx=.22 if (x=="a lot") newx=.67 and then it fitted (something like) the model: $$y = a + b_0 \times newx + b_1 \times newx^2 + b_2 \times newx^3$$ where you have linear $newx$, quadratic $newx^2$, and cubic $newx^3$ components. Note, I said that it's something like that, because the problem with the model described there is that $newx$, $newx^2$, and $newx^3$ are not at all independent. What lm() does instead is uses a set of contrasts generated by contr.poly(4). These contrasts ensure orthogonality, so that the linear, quadratic and cubic components are independent. But the principle is similar - when fitting ordered factors, lm() fits a linear, quadratic, cubic, etc... component. You can see this by comparing: coef(lm(y~ordered(x), d)) # (Intercept) ordered(x).L ordered(x).Q ordered(x).C # 5.998121421 4.472505514 0.006109021 -0.003125958 with contrasts(d$x) <- contr.poly(4) coef(lm(y~x, d)) # (Intercept) ordered(x).L ordered(x).Q ordered(x).C # 5.998121421 4.472505514 0.006109021 -0.003125958 Exactly identical. So if you want a fuller understanding of what happened, take a closer look at contr.poly() and orthogonal polynomial contrasts in general. One thing to note is that there is an implicit assumption hidden in here: the difference between each two levels is assumed to be equal. So "None" is as far from "some" as "some" is from "more", and "more" is from "a lot".
Results of lm() function with a dependent ordered categorical variable?
In this case, what lm() is doing is converting your "categorical" variable into a numeric sequence in order. To make this clearer, I'll adapt Bolker's code a bit to make the X variable more obviously
Results of lm() function with a dependent ordered categorical variable? In this case, what lm() is doing is converting your "categorical" variable into a numeric sequence in order. To make this clearer, I'll adapt Bolker's code a bit to make the X variable more obviously categorical: set.seed(101) d <- data.frame(x=sample(1:4,size=30,replace=TRUE)) d$y <- rnorm(30,1+2*d$x,sd=0.01) d$x = factor(d$x, labels=c("none", "some", "more", "a lot")) coef(lm(y~x, d)) # (Intercept) xsome xmore xa lot # 3.001627 1.991260 3.995619 5.999098 So here, the mean of x=None is in the intercept, and the deviation from that is indicated for each category. coef(lm(y~ordered(x), d)) # (Intercept) ordered(x).L ordered(x).Q ordered(x).C # 5.998121421 4.472505514 0.006109021 -0.003125958 Conceptually, what's happened here is that the ordered() function converted x into newx using (something similar to): if (x=="None") newx=-.67 if (x=="some") newx=-.22 if (x=="more") newx=.22 if (x=="a lot") newx=.67 and then it fitted (something like) the model: $$y = a + b_0 \times newx + b_1 \times newx^2 + b_2 \times newx^3$$ where you have linear $newx$, quadratic $newx^2$, and cubic $newx^3$ components. Note, I said that it's something like that, because the problem with the model described there is that $newx$, $newx^2$, and $newx^3$ are not at all independent. What lm() does instead is uses a set of contrasts generated by contr.poly(4). These contrasts ensure orthogonality, so that the linear, quadratic and cubic components are independent. But the principle is similar - when fitting ordered factors, lm() fits a linear, quadratic, cubic, etc... component. You can see this by comparing: coef(lm(y~ordered(x), d)) # (Intercept) ordered(x).L ordered(x).Q ordered(x).C # 5.998121421 4.472505514 0.006109021 -0.003125958 with contrasts(d$x) <- contr.poly(4) coef(lm(y~x, d)) # (Intercept) ordered(x).L ordered(x).Q ordered(x).C # 5.998121421 4.472505514 0.006109021 -0.003125958 Exactly identical. So if you want a fuller understanding of what happened, take a closer look at contr.poly() and orthogonal polynomial contrasts in general. One thing to note is that there is an implicit assumption hidden in here: the difference between each two levels is assumed to be equal. So "None" is as far from "some" as "some" is from "more", and "more" is from "a lot".
Results of lm() function with a dependent ordered categorical variable? In this case, what lm() is doing is converting your "categorical" variable into a numeric sequence in order. To make this clearer, I'll adapt Bolker's code a bit to make the X variable more obviously
30,978
Multi-label or multi-class...or both?
Definitions. In a classification task, your goal is to learn a mapping $h: X\rightarrow Y$ (with your favourite ML algorithm, e.g CNNs). We make two common distinctions: Binary vs multiclass: In binary classification, $\left|Y\right|=2$ (e.g, a positive category, and a negative category). In multiclass classifcation, $\left|Y\right|=k$ for some $k\in\mathbb{N}$. In other words, this is just a matter of "how many possible answers are there". Single-label vs multilabel: This refers to how many possible outcomes are possible for a single example $x\in X$. This refers to whether your chosen categories are mutually exclusive, or not. For example, if you are trying to predict the color of an object, then you're probably doing single label classification: a red object can not be a black object at the same time. On the other hand, if you're doing object detection in an image, then since one image can contain multiple objects in it, you're doing multi-label classification. Effect on network architecture. The first distinction determines the number of output units (i.e, number of neurons in the final layer). The second distinction determines which choice of activation function for the final layer + loss function you should you. For single-label, the standard choice is softmax with categorical cross-entropy; for multi-label, switch to sigmoid activations with binary-cross entropy. See here for a more detailed discussion on this question. Creating "hybrid" combinations. I'll describe an example similar to the one in your question. Suppose I'm trying to classify animals, and I'm interested in recognizing the following: color (black, white, orange) size (small, medium, large) type (cat, dog, chimpanzee) This looks confusing: some of the labels are mutually exclusive (an animal can't be both black and orange) and others aren't (it can be a black dog). In this case, the solution is to perform multi-class classification with $k=3\cdot 3=9$ (or generally, number of categories times the size of the largest category; in this case all categories were of equal length, 3). You just have to define the loss function carefully: You would apply a softmax activation for each group of 3 (each category) and compare that to the true label. I created a little sketch which I think makes it clear: So the final loss is $L(\hat y, y)=CE_{color} + CE_{size}$. The entire idea here is that we exploited information about the structure of the labels (which are mutually exclusive and which aren't) to significantly reduce the number of outputs (from an exponential number - all combinations, in this case $3^3$ - to a multiplicative number, $3\cdot 3$).
Multi-label or multi-class...or both?
Definitions. In a classification task, your goal is to learn a mapping $h: X\rightarrow Y$ (with your favourite ML algorithm, e.g CNNs). We make two common distinctions: Binary vs multiclass: In bin
Multi-label or multi-class...or both? Definitions. In a classification task, your goal is to learn a mapping $h: X\rightarrow Y$ (with your favourite ML algorithm, e.g CNNs). We make two common distinctions: Binary vs multiclass: In binary classification, $\left|Y\right|=2$ (e.g, a positive category, and a negative category). In multiclass classifcation, $\left|Y\right|=k$ for some $k\in\mathbb{N}$. In other words, this is just a matter of "how many possible answers are there". Single-label vs multilabel: This refers to how many possible outcomes are possible for a single example $x\in X$. This refers to whether your chosen categories are mutually exclusive, or not. For example, if you are trying to predict the color of an object, then you're probably doing single label classification: a red object can not be a black object at the same time. On the other hand, if you're doing object detection in an image, then since one image can contain multiple objects in it, you're doing multi-label classification. Effect on network architecture. The first distinction determines the number of output units (i.e, number of neurons in the final layer). The second distinction determines which choice of activation function for the final layer + loss function you should you. For single-label, the standard choice is softmax with categorical cross-entropy; for multi-label, switch to sigmoid activations with binary-cross entropy. See here for a more detailed discussion on this question. Creating "hybrid" combinations. I'll describe an example similar to the one in your question. Suppose I'm trying to classify animals, and I'm interested in recognizing the following: color (black, white, orange) size (small, medium, large) type (cat, dog, chimpanzee) This looks confusing: some of the labels are mutually exclusive (an animal can't be both black and orange) and others aren't (it can be a black dog). In this case, the solution is to perform multi-class classification with $k=3\cdot 3=9$ (or generally, number of categories times the size of the largest category; in this case all categories were of equal length, 3). You just have to define the loss function carefully: You would apply a softmax activation for each group of 3 (each category) and compare that to the true label. I created a little sketch which I think makes it clear: So the final loss is $L(\hat y, y)=CE_{color} + CE_{size}$. The entire idea here is that we exploited information about the structure of the labels (which are mutually exclusive and which aren't) to significantly reduce the number of outputs (from an exponential number - all combinations, in this case $3^3$ - to a multiplicative number, $3\cdot 3$).
Multi-label or multi-class...or both? Definitions. In a classification task, your goal is to learn a mapping $h: X\rightarrow Y$ (with your favourite ML algorithm, e.g CNNs). We make two common distinctions: Binary vs multiclass: In bin
30,979
What is the expectation of a random variable divided by an average $E\left[\frac{X_i}{\bar{X}}\right]$?
Let $X_1,\dots,X_n$ be independent and identically distributed random variables and define $$\bar{X}=\frac{X_1+X_2\dots+X_n}{n}.$$ Suppose that $\Pr\{\bar{X}\ne 0\}=1$. Since the $X_i$'s are identically distributed, symmetry tells us that, for $i=1,\dots n$, the (dependent) random variables $X_i/\bar{X}$ have the same distribution: $$ \frac{X_1}{\bar{X}} \sim \frac{X_2}{\bar{X}} \sim \dots \sim \frac{X_n}{\bar{X}}. $$ If the expectations $\mathrm{E}[X_i/\bar{X}]$ exist (this is a crucial point), then $$ \mathrm{E}\left[ \frac{X_1}{\bar{X}} \right] = \mathrm{E}\left[ \frac{X_2}{\bar{X}} \right] = \dots = \mathrm{E}\left[ \frac{X_n}{\bar{X}} \right], $$ and, for $i=1,\dots,n$, we have $$ \begin{align} \mathrm{E}\left[ \frac{X_i}{\bar{X}} \right] &= \frac{1}{n} \left( \mathrm{E}\left[ \frac{X_1}{\bar{X}} \right] + \mathrm{E}\left[ \frac{X_2}{\bar{X}} \right] + \dots + \mathrm{E}\left[ \frac{X_n}{\bar{X}} \right] \right) \\ &= \frac{1}{n}\,\mathrm{E}\left[ \frac{X_1}{\bar{X}} + \frac{X_2}{\bar{X}} + \dots + \frac{X_n}{\bar{X}} \right] \\ &= \frac{1}{n}\,\mathrm{E}\left[ \frac{X_1+X_2+\dots+X_n}{\bar{X}} \right] \\ &= \frac{1}{n}\,\mathrm{E}\left[ \frac{n\bar{X}}{\bar{X}} \right] \\ &= \frac{n}{n}\,\mathrm{E}\left[ \frac{\bar{X}}{\bar{X}} \right] = 1. \end{align} $$ Let's see if we can check this by simple Monte Carlo. x <- matrix(rgamma(10^6, 1, 1), nrow = 10^5) mean(x[, 3] / rowMeans(x)) [1] 1.00511 Fine, and the results don't change much under repetition.
What is the expectation of a random variable divided by an average $E\left[\frac{X_i}{\bar{X}}\right
Let $X_1,\dots,X_n$ be independent and identically distributed random variables and define $$\bar{X}=\frac{X_1+X_2\dots+X_n}{n}.$$ Suppose that $\Pr\{\bar{X}\ne 0\}=1$. Since the $X_i$'s are identical
What is the expectation of a random variable divided by an average $E\left[\frac{X_i}{\bar{X}}\right]$? Let $X_1,\dots,X_n$ be independent and identically distributed random variables and define $$\bar{X}=\frac{X_1+X_2\dots+X_n}{n}.$$ Suppose that $\Pr\{\bar{X}\ne 0\}=1$. Since the $X_i$'s are identically distributed, symmetry tells us that, for $i=1,\dots n$, the (dependent) random variables $X_i/\bar{X}$ have the same distribution: $$ \frac{X_1}{\bar{X}} \sim \frac{X_2}{\bar{X}} \sim \dots \sim \frac{X_n}{\bar{X}}. $$ If the expectations $\mathrm{E}[X_i/\bar{X}]$ exist (this is a crucial point), then $$ \mathrm{E}\left[ \frac{X_1}{\bar{X}} \right] = \mathrm{E}\left[ \frac{X_2}{\bar{X}} \right] = \dots = \mathrm{E}\left[ \frac{X_n}{\bar{X}} \right], $$ and, for $i=1,\dots,n$, we have $$ \begin{align} \mathrm{E}\left[ \frac{X_i}{\bar{X}} \right] &= \frac{1}{n} \left( \mathrm{E}\left[ \frac{X_1}{\bar{X}} \right] + \mathrm{E}\left[ \frac{X_2}{\bar{X}} \right] + \dots + \mathrm{E}\left[ \frac{X_n}{\bar{X}} \right] \right) \\ &= \frac{1}{n}\,\mathrm{E}\left[ \frac{X_1}{\bar{X}} + \frac{X_2}{\bar{X}} + \dots + \frac{X_n}{\bar{X}} \right] \\ &= \frac{1}{n}\,\mathrm{E}\left[ \frac{X_1+X_2+\dots+X_n}{\bar{X}} \right] \\ &= \frac{1}{n}\,\mathrm{E}\left[ \frac{n\bar{X}}{\bar{X}} \right] \\ &= \frac{n}{n}\,\mathrm{E}\left[ \frac{\bar{X}}{\bar{X}} \right] = 1. \end{align} $$ Let's see if we can check this by simple Monte Carlo. x <- matrix(rgamma(10^6, 1, 1), nrow = 10^5) mean(x[, 3] / rowMeans(x)) [1] 1.00511 Fine, and the results don't change much under repetition.
What is the expectation of a random variable divided by an average $E\left[\frac{X_i}{\bar{X}}\right Let $X_1,\dots,X_n$ be independent and identically distributed random variables and define $$\bar{X}=\frac{X_1+X_2\dots+X_n}{n}.$$ Suppose that $\Pr\{\bar{X}\ne 0\}=1$. Since the $X_i$'s are identical
30,980
Is there an example of two causally dependent events being logically (probabilistically) independent?
Consider an Exclusive-OR (XOR) gate which is an electronic circuit (logic gate) with two inputs $X$ and $Y$ and an output $Z$ where $X,Y,Z$ take on values in the discrete set $\{0, 1\}$. Think of these as Boolean variables (or Bernouiii random variables if you like). $Z$ is causally related to $X$ and $Y$ by the Exclusive-OR operation: $$Z = X\oplus Y = X\bar{Y} \,\vee\, \bar{X}Y$$ if you are a Booleander or $$Z = X(1-Y)+(1-X)Y= X + Y -2XY$$ if you are a Bernoullist. Be that as it may, suppose that $X$ and $Y$ are independent (meaning that $P(X=a,Y=b) = P(X=a)P(Y=b)$ for all $a,b$ in $\{0, 1\}$. Then, \begin{align}P(Z=1) &= P(X\neq Y)\\ &=P(X=1, Y=0) + P(X=0, Y=1)\\ &= P(X=1)P(Y=0) + P(X=0)P(Y=1).\end{align} Everything OK thus far? Now suppose that $P(X=1) = P(Y=1)= \frac 12$. Then it is easy to verify that $P(Z=1) = \frac 12$ also. Now, $Z$ and $X$ are very definitely causally related: the output of an XOR gate does depend on its input(s). But, the event $\{Z=1,X=1\}$ occurs if and only if the event $\{X=1, Y=0\}$ occurs and so $$P(Z=1, X=1) = P(X=1,Y=0) = \frac 14 = P(Z=1)P(X=1) = \frac 12\times \frac 12$$ showing that the causally related events $\{Z=1\}$ and $\{X=1\}$ are in fact probabilistically independent. Similarly, $\{Z=1\}$ and $\{Y=1\}$ independent, in fact, the three events $\{X=1\}$, $\{Y=1\}$, and $\{Z=1\}$ are pairwise independent but not mutually independent since $$P(X=1, Y=1, Z=1) = 0 \neq P(X=1)P(Y=1)P(Z=1) = \frac 18.$$ Thus, causal dependence need not be reflected in probabilistic dependence; it is possible to have causally dependent events be probabilistically independent. I will also say that this probabilistic independence is purely a property of the probability measure: if we take $P(X=1)$ or $P(Y=1)$ to be any number in $(0,1)$ other than the $\frac 12$ that I sneakily chose above, the probabilistic independence disappears and the causally dependent events are also probabilistically dependent. Lest you think that this is an oddball example that will hardly ever be encountered in real life, consider the gold standard in statistical theory and practice: three standard normal random variables $X,Y,Z$. Now suppose that their joint density $f_{X,Y,Z}(x,y,z)$ is not $\phi(x)\phi(y)\phi(z)$ where $\phi(\cdot)$ is the standard normal density (as would be the case if $X,Y,Z$ were mutually independent standard normal random variables), but rather $$f_{X,Y,Z}(x,y,z) = \begin{cases} 2\phi(x)\phi(y)\phi(z) & ~~~~\text{if}~ x \geq 0, y\geq 0, z \geq 0,\\ & \text{or if}~ x < 0, y < 0, z \geq 0,\\ & \text{or if}~ x < 0, y\geq 0, z < 0,\\ & \text{or if}~ x \geq 0, y< 0, z < 0,\\ 0 & \text{otherwise.} \end{cases}\tag{1}$$ Note that $X$, $Y$, and $Z$ are not a set of three jointly normal random variables (that is, they don't have a multivariate normal distribution) but it can be shown that any two of these is indeed a pair of independent standard normal random variables. For details of the verification, see the latter half of this answer of mine.
Is there an example of two causally dependent events being logically (probabilistically) independent
Consider an Exclusive-OR (XOR) gate which is an electronic circuit (logic gate) with two inputs $X$ and $Y$ and an output $Z$ where $X,Y,Z$ take on values in the discrete set $\{0, 1\}$. Think of the
Is there an example of two causally dependent events being logically (probabilistically) independent? Consider an Exclusive-OR (XOR) gate which is an electronic circuit (logic gate) with two inputs $X$ and $Y$ and an output $Z$ where $X,Y,Z$ take on values in the discrete set $\{0, 1\}$. Think of these as Boolean variables (or Bernouiii random variables if you like). $Z$ is causally related to $X$ and $Y$ by the Exclusive-OR operation: $$Z = X\oplus Y = X\bar{Y} \,\vee\, \bar{X}Y$$ if you are a Booleander or $$Z = X(1-Y)+(1-X)Y= X + Y -2XY$$ if you are a Bernoullist. Be that as it may, suppose that $X$ and $Y$ are independent (meaning that $P(X=a,Y=b) = P(X=a)P(Y=b)$ for all $a,b$ in $\{0, 1\}$. Then, \begin{align}P(Z=1) &= P(X\neq Y)\\ &=P(X=1, Y=0) + P(X=0, Y=1)\\ &= P(X=1)P(Y=0) + P(X=0)P(Y=1).\end{align} Everything OK thus far? Now suppose that $P(X=1) = P(Y=1)= \frac 12$. Then it is easy to verify that $P(Z=1) = \frac 12$ also. Now, $Z$ and $X$ are very definitely causally related: the output of an XOR gate does depend on its input(s). But, the event $\{Z=1,X=1\}$ occurs if and only if the event $\{X=1, Y=0\}$ occurs and so $$P(Z=1, X=1) = P(X=1,Y=0) = \frac 14 = P(Z=1)P(X=1) = \frac 12\times \frac 12$$ showing that the causally related events $\{Z=1\}$ and $\{X=1\}$ are in fact probabilistically independent. Similarly, $\{Z=1\}$ and $\{Y=1\}$ independent, in fact, the three events $\{X=1\}$, $\{Y=1\}$, and $\{Z=1\}$ are pairwise independent but not mutually independent since $$P(X=1, Y=1, Z=1) = 0 \neq P(X=1)P(Y=1)P(Z=1) = \frac 18.$$ Thus, causal dependence need not be reflected in probabilistic dependence; it is possible to have causally dependent events be probabilistically independent. I will also say that this probabilistic independence is purely a property of the probability measure: if we take $P(X=1)$ or $P(Y=1)$ to be any number in $(0,1)$ other than the $\frac 12$ that I sneakily chose above, the probabilistic independence disappears and the causally dependent events are also probabilistically dependent. Lest you think that this is an oddball example that will hardly ever be encountered in real life, consider the gold standard in statistical theory and practice: three standard normal random variables $X,Y,Z$. Now suppose that their joint density $f_{X,Y,Z}(x,y,z)$ is not $\phi(x)\phi(y)\phi(z)$ where $\phi(\cdot)$ is the standard normal density (as would be the case if $X,Y,Z$ were mutually independent standard normal random variables), but rather $$f_{X,Y,Z}(x,y,z) = \begin{cases} 2\phi(x)\phi(y)\phi(z) & ~~~~\text{if}~ x \geq 0, y\geq 0, z \geq 0,\\ & \text{or if}~ x < 0, y < 0, z \geq 0,\\ & \text{or if}~ x < 0, y\geq 0, z < 0,\\ & \text{or if}~ x \geq 0, y< 0, z < 0,\\ 0 & \text{otherwise.} \end{cases}\tag{1}$$ Note that $X$, $Y$, and $Z$ are not a set of three jointly normal random variables (that is, they don't have a multivariate normal distribution) but it can be shown that any two of these is indeed a pair of independent standard normal random variables. For details of the verification, see the latter half of this answer of mine.
Is there an example of two causally dependent events being logically (probabilistically) independent Consider an Exclusive-OR (XOR) gate which is an electronic circuit (logic gate) with two inputs $X$ and $Y$ and an output $Z$ where $X,Y,Z$ take on values in the discrete set $\{0, 1\}$. Think of the
30,981
Is there an example of two causally dependent events being logically (probabilistically) independent?
In causal modelling, this kind of thing is possible in cases where there are multiple causal effects, and they happen to exactly cancel each other out in a probabilistic sense. Hence, it is possible that $\mathcal{A}$ causes $\mathcal{B}$, but it also causes $\mathcal{C}$, $\mathcal{D}$ and $\mathcal{E}$, and these latter events have a negative causal effect on $\mathcal{B}$, in a way that exactly cancels out the direct causal effect from $\mathcal{A}$. In models of probabilistic causality this kind of pathological situation is usually ruled out by a faithfulness assumption, which assumes that the probabilistic relations are "faithful" to the underlying causal structure, and do not cancel out. A basic primer on probabilistic causality and the faithfulness assumption can be found in the Stanford Encyclopaedia of Philosophy.
Is there an example of two causally dependent events being logically (probabilistically) independent
In causal modelling, this kind of thing is possible in cases where there are multiple causal effects, and they happen to exactly cancel each other out in a probabilistic sense. Hence, it is possible
Is there an example of two causally dependent events being logically (probabilistically) independent? In causal modelling, this kind of thing is possible in cases where there are multiple causal effects, and they happen to exactly cancel each other out in a probabilistic sense. Hence, it is possible that $\mathcal{A}$ causes $\mathcal{B}$, but it also causes $\mathcal{C}$, $\mathcal{D}$ and $\mathcal{E}$, and these latter events have a negative causal effect on $\mathcal{B}$, in a way that exactly cancels out the direct causal effect from $\mathcal{A}$. In models of probabilistic causality this kind of pathological situation is usually ruled out by a faithfulness assumption, which assumes that the probabilistic relations are "faithful" to the underlying causal structure, and do not cancel out. A basic primer on probabilistic causality and the faithfulness assumption can be found in the Stanford Encyclopaedia of Philosophy.
Is there an example of two causally dependent events being logically (probabilistically) independent In causal modelling, this kind of thing is possible in cases where there are multiple causal effects, and they happen to exactly cancel each other out in a probabilistic sense. Hence, it is possible
30,982
Is there an example of two causally dependent events being logically (probabilistically) independent?
Examples can be created at will, because causation concerns truth, but probability concerns logic. Suppose it is a fact that $A \in \mathcal{A}$ and $B \in \mathcal{B}$ and $A = x$ causes $B = y$. Now consider given information $\mathcal{I} \equiv \unicode{8220}\text{$A \in \mathcal{A}$ and $B \in \mathcal{B}$}\unicode{8221}$. Then \begin{align} \mathrm{prob}(A = a, B = b | \mathcal{I}) &= \mathrm{prob}(A = a | \mathcal{I}) \: \mathrm{prob}(B = b | \mathcal{I}) \\ &= \frac{1}{|\mathcal{A}||\mathcal{B}|} \end{align} because independence is the maximum entropy distribution consistent with $\mathcal{I}$. The events are then logically independent, given $\mathcal{I}$, despite being causally dependent. It makes no sense to speak of events being logically independent in the absence of any given assumptions: logic requires assumptions. Causes, on the other hand, exist independently of our assumptions. Ideas about causation, of course, are themselves logical, and quite distinct from causes themselves. So if we seek to compare causal ideas about events with logical ideas about those events, in fact they are one and the same thing. For example, if we have $\mathcal{I} \equiv \unicode{8220} \text{$A \in \mathcal{A}$ and $B \in \mathcal{B}$ and $A = x$ causes $B = y$}\unicode{8221}$, then \begin{align} \mathrm{prob}(A = a, B = b | \mathcal{I}) &= \mathrm{prob}(B = b | A = a, \mathcal{I}) \: \mathrm{prob}(A = a | \mathcal{I}) \\ &= \frac{1}{|\mathcal{A}|} \begin{cases} \delta_{b y} & A = x \\ \frac{1}{|\mathcal{B}|} & A \neq x \end{cases} \end{align} whereupon the logic expresses the causal idea.
Is there an example of two causally dependent events being logically (probabilistically) independent
Examples can be created at will, because causation concerns truth, but probability concerns logic. Suppose it is a fact that $A \in \mathcal{A}$ and $B \in \mathcal{B}$ and $A = x$ causes $B = y$. Now
Is there an example of two causally dependent events being logically (probabilistically) independent? Examples can be created at will, because causation concerns truth, but probability concerns logic. Suppose it is a fact that $A \in \mathcal{A}$ and $B \in \mathcal{B}$ and $A = x$ causes $B = y$. Now consider given information $\mathcal{I} \equiv \unicode{8220}\text{$A \in \mathcal{A}$ and $B \in \mathcal{B}$}\unicode{8221}$. Then \begin{align} \mathrm{prob}(A = a, B = b | \mathcal{I}) &= \mathrm{prob}(A = a | \mathcal{I}) \: \mathrm{prob}(B = b | \mathcal{I}) \\ &= \frac{1}{|\mathcal{A}||\mathcal{B}|} \end{align} because independence is the maximum entropy distribution consistent with $\mathcal{I}$. The events are then logically independent, given $\mathcal{I}$, despite being causally dependent. It makes no sense to speak of events being logically independent in the absence of any given assumptions: logic requires assumptions. Causes, on the other hand, exist independently of our assumptions. Ideas about causation, of course, are themselves logical, and quite distinct from causes themselves. So if we seek to compare causal ideas about events with logical ideas about those events, in fact they are one and the same thing. For example, if we have $\mathcal{I} \equiv \unicode{8220} \text{$A \in \mathcal{A}$ and $B \in \mathcal{B}$ and $A = x$ causes $B = y$}\unicode{8221}$, then \begin{align} \mathrm{prob}(A = a, B = b | \mathcal{I}) &= \mathrm{prob}(B = b | A = a, \mathcal{I}) \: \mathrm{prob}(A = a | \mathcal{I}) \\ &= \frac{1}{|\mathcal{A}|} \begin{cases} \delta_{b y} & A = x \\ \frac{1}{|\mathcal{B}|} & A \neq x \end{cases} \end{align} whereupon the logic expresses the causal idea.
Is there an example of two causally dependent events being logically (probabilistically) independent Examples can be created at will, because causation concerns truth, but probability concerns logic. Suppose it is a fact that $A \in \mathcal{A}$ and $B \in \mathcal{B}$ and $A = x$ causes $B = y$. Now
30,983
Is there an example of two causally dependent events being logically (probabilistically) independent?
Depends on what you mean by probability. If you are a frequentist, then conditional probability P(A|B) means something akin to how much B causes A. If you are Bayesian, P(A|B) only measures the logical connection between B and A. To sum up, for a Bayesian, the answer to your question is yes, for a frequentist, it is no. For ones, who are a mix of both, it is a tricky question.
Is there an example of two causally dependent events being logically (probabilistically) independent
Depends on what you mean by probability. If you are a frequentist, then conditional probability P(A|B) means something akin to how much B causes A. If you are Bayesian, P(A|B) only measures the logica
Is there an example of two causally dependent events being logically (probabilistically) independent? Depends on what you mean by probability. If you are a frequentist, then conditional probability P(A|B) means something akin to how much B causes A. If you are Bayesian, P(A|B) only measures the logical connection between B and A. To sum up, for a Bayesian, the answer to your question is yes, for a frequentist, it is no. For ones, who are a mix of both, it is a tricky question.
Is there an example of two causally dependent events being logically (probabilistically) independent Depends on what you mean by probability. If you are a frequentist, then conditional probability P(A|B) means something akin to how much B causes A. If you are Bayesian, P(A|B) only measures the logica
30,984
Is Naive Bayes becoming more popular? Why?
I'd be cautious about over interpreting Google trends. Here's naive bayes (blue) vs. k-means (red). What does it mean? I can make up a story that common variation is due to machine learning classes that teach both naive bayes and k-means. But that's just an educated guess, not an answer. I really don't know. And unless we start surveying people who search for "naive bayes" I don't see how anyone can positively answer this either.
Is Naive Bayes becoming more popular? Why?
I'd be cautious about over interpreting Google trends. Here's naive bayes (blue) vs. k-means (red). What does it mean? I can make up a story that common variation is due to machine learning classes th
Is Naive Bayes becoming more popular? Why? I'd be cautious about over interpreting Google trends. Here's naive bayes (blue) vs. k-means (red). What does it mean? I can make up a story that common variation is due to machine learning classes that teach both naive bayes and k-means. But that's just an educated guess, not an answer. I really don't know. And unless we start surveying people who search for "naive bayes" I don't see how anyone can positively answer this either.
Is Naive Bayes becoming more popular? Why? I'd be cautious about over interpreting Google trends. Here's naive bayes (blue) vs. k-means (red). What does it mean? I can make up a story that common variation is due to machine learning classes th
30,985
Interpreting Poisson output in R [duplicate]
Since it's a Poisson model, the expected value of the dependent variable is related to the independent variables by inverse of the log link, which is to say $E(y) = \exp(\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2)$ where here, x1 = 0 if female and 1 if male, x2 = age, and the $\beta_0$ to $\beta_3$ are the estimated coefficients in the order shown in the R output. The three independent variables here are all equal to zero when you have a female with age zero. So the expected number of visits for a female with age zero is $\exp(-1.466168) = 0.23$ That's the meaning of the intercept. If you take its exponential, you get the baseline number of visits, where the baseline means that all the independent variables are set to zero. The expected number of visits for a male with age zero is $\exp(-1.466168 - 0.801987) = 0.10$ or $\exp(-.801987) = 0.45$ times the expected number of visits for a female with age zero. As you increase the age by one, the expected number of visits for a female increases by a factor of $\exp(0.009322) = 1.009$ or about 1%. As you increase the age by one, the expected number of visits for a male increases by a factor of $\exp(0.009322 + 0.012186) = 1.022$ or about 2%. So, overall, you expect about half the number of visits for newborn males compared to females, but the expected number of visits increases with age at about twice the rate it does for females. The AIC isn't helpful in isolation. You'd compare it to the AIC of some alternative model. Roughly speaking, whichever model has a lower AIC has a better fit after adjusting for the number of parameters. You can use the deviance to do a goodness-of-fit test; basically, whether whatever unexplained variation is due to the kind of random variation you'd expect from a Poisson distribution. There isn't a closed-form solution for the parameters of the Poisson model in general; they have to be computed using numerical methods. The Fisher scoring iterations tell how many iterations the optimizer had to go through before the deviance (I think) was minimized to within some acceptable tolerance. You would probably only worry about this if the number of iterations were really high, which might point to a poorly-specified model (which you would probably spot from abnormally large parameter values and/or standard errors anyway).
Interpreting Poisson output in R [duplicate]
Since it's a Poisson model, the expected value of the dependent variable is related to the independent variables by inverse of the log link, which is to say $E(y) = \exp(\beta_0 + \beta_1 x_1 + \beta_
Interpreting Poisson output in R [duplicate] Since it's a Poisson model, the expected value of the dependent variable is related to the independent variables by inverse of the log link, which is to say $E(y) = \exp(\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_3 x_1 x_2)$ where here, x1 = 0 if female and 1 if male, x2 = age, and the $\beta_0$ to $\beta_3$ are the estimated coefficients in the order shown in the R output. The three independent variables here are all equal to zero when you have a female with age zero. So the expected number of visits for a female with age zero is $\exp(-1.466168) = 0.23$ That's the meaning of the intercept. If you take its exponential, you get the baseline number of visits, where the baseline means that all the independent variables are set to zero. The expected number of visits for a male with age zero is $\exp(-1.466168 - 0.801987) = 0.10$ or $\exp(-.801987) = 0.45$ times the expected number of visits for a female with age zero. As you increase the age by one, the expected number of visits for a female increases by a factor of $\exp(0.009322) = 1.009$ or about 1%. As you increase the age by one, the expected number of visits for a male increases by a factor of $\exp(0.009322 + 0.012186) = 1.022$ or about 2%. So, overall, you expect about half the number of visits for newborn males compared to females, but the expected number of visits increases with age at about twice the rate it does for females. The AIC isn't helpful in isolation. You'd compare it to the AIC of some alternative model. Roughly speaking, whichever model has a lower AIC has a better fit after adjusting for the number of parameters. You can use the deviance to do a goodness-of-fit test; basically, whether whatever unexplained variation is due to the kind of random variation you'd expect from a Poisson distribution. There isn't a closed-form solution for the parameters of the Poisson model in general; they have to be computed using numerical methods. The Fisher scoring iterations tell how many iterations the optimizer had to go through before the deviance (I think) was minimized to within some acceptable tolerance. You would probably only worry about this if the number of iterations were really high, which might point to a poorly-specified model (which you would probably spot from abnormally large parameter values and/or standard errors anyway).
Interpreting Poisson output in R [duplicate] Since it's a Poisson model, the expected value of the dependent variable is related to the independent variables by inverse of the log link, which is to say $E(y) = \exp(\beta_0 + \beta_1 x_1 + \beta_
30,986
High $R^2$ squared and high $p$-value for simple linear regression
Yes, it is possible. The $R^2$ and the $t$ statistic (used to compute the p-value) are related exactly by: $ |t| = \sqrt{\frac{R^2}{(1- R^2)}(n -2)} $ Therefore, you can have a high $R^2$ with a high p-value (a low $|t|$) if you have a small sample. For instance, take $n = 3$. For this sample size to give you a (two-sided) p-value less then 10% you would need an $R^2$ greater than 85% -- anything less than that would give you "non-significant" p-value. As a concrete example, the simulation below produces an $R^2$ close to 0.5 with a p-value of $0.516$. set.seed(10) n <- 3 x <- rnorm(n, 0, 1) y <- 1 + x + rnorm(n, 0, 1) summary(m1 <- lm(y ~ x)) Call: lm(formula = y ~ x) Residuals: 1 2 3 -0.36552 0.42802 -0.06251 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.7756 0.4261 1.82 0.320 x 0.5065 0.5333 0.95 0.516 Residual standard error: 0.5663 on 1 degrees of freedom Multiple R-squared: 0.4743, Adjusted R-squared: -0.05148 F-statistic: 0.9021 on 1 and 1 DF, p-value: 0.5164 For the opposite case (low p-value with low $R^2$), you can trivially obtain that by setting a regression where $x$ has a low explanatory power and let $n \to \infty$ to get a p-value as small as you want.
High $R^2$ squared and high $p$-value for simple linear regression
Yes, it is possible. The $R^2$ and the $t$ statistic (used to compute the p-value) are related exactly by: $ |t| = \sqrt{\frac{R^2}{(1- R^2)}(n -2)} $ Therefore, you can have a high $R^2$ with a high
High $R^2$ squared and high $p$-value for simple linear regression Yes, it is possible. The $R^2$ and the $t$ statistic (used to compute the p-value) are related exactly by: $ |t| = \sqrt{\frac{R^2}{(1- R^2)}(n -2)} $ Therefore, you can have a high $R^2$ with a high p-value (a low $|t|$) if you have a small sample. For instance, take $n = 3$. For this sample size to give you a (two-sided) p-value less then 10% you would need an $R^2$ greater than 85% -- anything less than that would give you "non-significant" p-value. As a concrete example, the simulation below produces an $R^2$ close to 0.5 with a p-value of $0.516$. set.seed(10) n <- 3 x <- rnorm(n, 0, 1) y <- 1 + x + rnorm(n, 0, 1) summary(m1 <- lm(y ~ x)) Call: lm(formula = y ~ x) Residuals: 1 2 3 -0.36552 0.42802 -0.06251 Coefficients: Estimate Std. Error t value Pr(>|t|) (Intercept) 0.7756 0.4261 1.82 0.320 x 0.5065 0.5333 0.95 0.516 Residual standard error: 0.5663 on 1 degrees of freedom Multiple R-squared: 0.4743, Adjusted R-squared: -0.05148 F-statistic: 0.9021 on 1 and 1 DF, p-value: 0.5164 For the opposite case (low p-value with low $R^2$), you can trivially obtain that by setting a regression where $x$ has a low explanatory power and let $n \to \infty$ to get a p-value as small as you want.
High $R^2$ squared and high $p$-value for simple linear regression Yes, it is possible. The $R^2$ and the $t$ statistic (used to compute the p-value) are related exactly by: $ |t| = \sqrt{\frac{R^2}{(1- R^2)}(n -2)} $ Therefore, you can have a high $R^2$ with a high
30,987
High $R^2$ squared and high $p$-value for simple linear regression
This looks like a self-study, so I'll offer a hint: Is either or both of these measures (R-square and p-value) related to the sample size?
High $R^2$ squared and high $p$-value for simple linear regression
This looks like a self-study, so I'll offer a hint: Is either or both of these measures (R-square and p-value) related to the sample size?
High $R^2$ squared and high $p$-value for simple linear regression This looks like a self-study, so I'll offer a hint: Is either or both of these measures (R-square and p-value) related to the sample size?
High $R^2$ squared and high $p$-value for simple linear regression This looks like a self-study, so I'll offer a hint: Is either or both of these measures (R-square and p-value) related to the sample size?
30,988
High $R^2$ squared and high $p$-value for simple linear regression
Here is another example: $y_1 = c + \epsilon,y_2 = c,\ y_3 = \epsilon,$ where $c$ is a constant and $\epsilon \sim \mathcal{N}(0, \sigma^2)$ is a Gaussian noise. Consider the two regression problems: (1) $y_1 = \hat{\beta}_2 y_2 +\epsilon_2$ (2) $y_1 = \hat{\beta}_3 y_3 +\epsilon_3$ Could you tell in which case, we have a high $R^2$ and a high $p$-value; and in which case, we have a low $R^2$ and a low $p$-value? p.s. $\frac{R^2}{1-R^2}$ in the formula of Carlos' answer is signal-to-noise ratio of the regression.
High $R^2$ squared and high $p$-value for simple linear regression
Here is another example: $y_1 = c + \epsilon,y_2 = c,\ y_3 = \epsilon,$ where $c$ is a constant and $\epsilon \sim \mathcal{N}(0, \sigma^2)$ is a Gaussian noise. Consider the two regression problems:
High $R^2$ squared and high $p$-value for simple linear regression Here is another example: $y_1 = c + \epsilon,y_2 = c,\ y_3 = \epsilon,$ where $c$ is a constant and $\epsilon \sim \mathcal{N}(0, \sigma^2)$ is a Gaussian noise. Consider the two regression problems: (1) $y_1 = \hat{\beta}_2 y_2 +\epsilon_2$ (2) $y_1 = \hat{\beta}_3 y_3 +\epsilon_3$ Could you tell in which case, we have a high $R^2$ and a high $p$-value; and in which case, we have a low $R^2$ and a low $p$-value? p.s. $\frac{R^2}{1-R^2}$ in the formula of Carlos' answer is signal-to-noise ratio of the regression.
High $R^2$ squared and high $p$-value for simple linear regression Here is another example: $y_1 = c + \epsilon,y_2 = c,\ y_3 = \epsilon,$ where $c$ is a constant and $\epsilon \sim \mathcal{N}(0, \sigma^2)$ is a Gaussian noise. Consider the two regression problems:
30,989
Wildly different $R^2$ between linear regression in statsmodels and sklearn
In your scikit-learn model, you included an intercept using the fit_intercept=True method. This fit both your intercept and the slope. In statsmodels, if you want to include an intercept, you need to run the command x1 = stat.add_constant(x1) in order to create a column of constants. Then running the sm.OLS() command would yield an R-squared value of around 0.056. It's also important to note that when constructing a model in statsmodels, you want to put your y1 first and x1 second rather than x1, then y1. The arguments are reversed from statsmodels to scikit-learn.
Wildly different $R^2$ between linear regression in statsmodels and sklearn
In your scikit-learn model, you included an intercept using the fit_intercept=True method. This fit both your intercept and the slope. In statsmodels, if you want to include an intercept, you need to
Wildly different $R^2$ between linear regression in statsmodels and sklearn In your scikit-learn model, you included an intercept using the fit_intercept=True method. This fit both your intercept and the slope. In statsmodels, if you want to include an intercept, you need to run the command x1 = stat.add_constant(x1) in order to create a column of constants. Then running the sm.OLS() command would yield an R-squared value of around 0.056. It's also important to note that when constructing a model in statsmodels, you want to put your y1 first and x1 second rather than x1, then y1. The arguments are reversed from statsmodels to scikit-learn.
Wildly different $R^2$ between linear regression in statsmodels and sklearn In your scikit-learn model, you included an intercept using the fit_intercept=True method. This fit both your intercept and the slope. In statsmodels, if you want to include an intercept, you need to
30,990
Proof of variance of stationary time series
Since this post has attracted so many answers, it seems worthwhile to show the idea. Here is a diagram of the covariance matrix $\Sigma = \operatorname{Cov}(X_1,X_2,\ldots, X_n).$ Values that are necessarily equal receive the same color. It has this diagonal striped pattern because the covariances depend only on the absolute lags--and the lags index the diagonals. The variance of a sum of random variables $X_1+\cdots +X_n$ is the sum of all their variances and covariances, taken in all orders. This is a consequence of the multilinear property of covariance. It is easily demonstrated by observing $X_1+\cdots +X_n$ is the dot product of the random vector $\mathbf{X}=(X_1,\cdots,X_n)$ and the vector $\mathbf{1}=(1,1,\ldots 1)$ (with $n$ components). Therefore the variance of the sum is $$\operatorname{Var}(X_1+\cdots+X_n) = \mathbf{1}^\prime \Sigma \mathbf{1},$$ which the rules of matrix multiplication tell us is the sum of all the entries of $\Sigma.$ The formula in the question sums the entries of $\Sigma$ by color: There are $n$ copies of $\gamma_0$ (in red, on the diagonal). There are $2(n-1)$ copies of $\gamma_1$ (in orange, on both sides of the diagonal: this is where the factor of $2$ comes from). There are $2(n-2)$ copies of $\gamma_2$ (in yellow). ... and so on, up to $2$ copies of $\gamma_{n-1}$ (in blue). Therefore, by merely looking at the figure, we obtain $$\operatorname{Var}(X_1+\cdots+X_n) = n\gamma_0 + 2(n-1)\gamma_1 + 2(n-2)\gamma_2 + \cdots + 2\gamma_{n-1}.$$ The general pattern is There are $n$ copies of $\gamma_0$ and $2(n-m)$ copies of $\gamma_m$ for $m=1,2,\ldots, n-1.$ The question asks for the variance of $1/n$ times this sum. Again, according to the multilinear property of variance, we must multiply the variance of the sum by $1/n^2.$ Doing that to the preceding formula gives the answer, $$\operatorname{Var}((X_1+\cdots+X_n)/n) = \frac{1}{n^2}\left[n\gamma_0 + \sum_{m=1}^{n-1} 2(n-m)\gamma_m\right].$$ Comparing this to the formula in the question helps us interpret the question's "$1/n$" factors as really being $1/n=n/n^2,$ $(1-1/n)/n= (n-1)/n^2,$ and so on down to $(1-(n-1)/n)/n = 1/n^2.$
Proof of variance of stationary time series
Since this post has attracted so many answers, it seems worthwhile to show the idea. Here is a diagram of the covariance matrix $\Sigma = \operatorname{Cov}(X_1,X_2,\ldots, X_n).$ Values that are nec
Proof of variance of stationary time series Since this post has attracted so many answers, it seems worthwhile to show the idea. Here is a diagram of the covariance matrix $\Sigma = \operatorname{Cov}(X_1,X_2,\ldots, X_n).$ Values that are necessarily equal receive the same color. It has this diagonal striped pattern because the covariances depend only on the absolute lags--and the lags index the diagonals. The variance of a sum of random variables $X_1+\cdots +X_n$ is the sum of all their variances and covariances, taken in all orders. This is a consequence of the multilinear property of covariance. It is easily demonstrated by observing $X_1+\cdots +X_n$ is the dot product of the random vector $\mathbf{X}=(X_1,\cdots,X_n)$ and the vector $\mathbf{1}=(1,1,\ldots 1)$ (with $n$ components). Therefore the variance of the sum is $$\operatorname{Var}(X_1+\cdots+X_n) = \mathbf{1}^\prime \Sigma \mathbf{1},$$ which the rules of matrix multiplication tell us is the sum of all the entries of $\Sigma.$ The formula in the question sums the entries of $\Sigma$ by color: There are $n$ copies of $\gamma_0$ (in red, on the diagonal). There are $2(n-1)$ copies of $\gamma_1$ (in orange, on both sides of the diagonal: this is where the factor of $2$ comes from). There are $2(n-2)$ copies of $\gamma_2$ (in yellow). ... and so on, up to $2$ copies of $\gamma_{n-1}$ (in blue). Therefore, by merely looking at the figure, we obtain $$\operatorname{Var}(X_1+\cdots+X_n) = n\gamma_0 + 2(n-1)\gamma_1 + 2(n-2)\gamma_2 + \cdots + 2\gamma_{n-1}.$$ The general pattern is There are $n$ copies of $\gamma_0$ and $2(n-m)$ copies of $\gamma_m$ for $m=1,2,\ldots, n-1.$ The question asks for the variance of $1/n$ times this sum. Again, according to the multilinear property of variance, we must multiply the variance of the sum by $1/n^2.$ Doing that to the preceding formula gives the answer, $$\operatorname{Var}((X_1+\cdots+X_n)/n) = \frac{1}{n^2}\left[n\gamma_0 + \sum_{m=1}^{n-1} 2(n-m)\gamma_m\right].$$ Comparing this to the formula in the question helps us interpret the question's "$1/n$" factors as really being $1/n=n/n^2,$ $(1-1/n)/n= (n-1)/n^2,$ and so on down to $(1-(n-1)/n)/n = 1/n^2.$
Proof of variance of stationary time series Since this post has attracted so many answers, it seems worthwhile to show the idea. Here is a diagram of the covariance matrix $\Sigma = \operatorname{Cov}(X_1,X_2,\ldots, X_n).$ Values that are nec
30,991
Proof of variance of stationary time series
You are almost there! Now you just need to recognise that auto-correlation only depends on the lag, so you have $\gamma(m) = \gamma(|m|)$, which means that the entire summand depends on $m$ only through $|m|$ (i.e., it is symmetric around $m=0$). This allows you to split the sum into the middle element ($m=0$) and two lots of the symmetric part ($|m| = 1,...,n$), which gives you: $$\begin{equation} \begin{aligned} \text{Var}(\bar{X}) &= \frac{1}{n} \sum_{m=-n}^{n} \Big( 1-\frac{|m|}{n} \Big) \gamma(m) \\[6pt] &= \frac{1}{n} \sum_{m=-n}^{n} \Big( 1-\frac{|m|}{n} \Big) \gamma(|m|) \\[6pt] &= \frac{1}{n} \Bigg[ \gamma(0) +2\sum_{|m|=1}^n \Big( 1-\frac{|m|}{n} \Big) \gamma(|m|) \Bigg] \\[6pt] &= \frac{\gamma(0)}{n} + \frac{2}{n} \sum_{m=1}^n \Big( 1-\frac{m}{n} \Big) \gamma(m) \\[6pt] &= \frac{\gamma(0)}{n} + \frac{2}{n} \sum_{m=1}^{n-1} \Big( 1-\frac{m}{n} \Big) \gamma(m). \\[6pt] \end{aligned} \end{equation}$$ (The last step follows from the fact that $1-\tfrac{m}{n} = 0$ for $m=n$.) This method of splitting symmetric sums around their mid-point is a common trick used in these kinds of cases to simplify the sum by taking it only over positive arguments. It is a worthwhile trick to learn in general.
Proof of variance of stationary time series
You are almost there! Now you just need to recognise that auto-correlation only depends on the lag, so you have $\gamma(m) = \gamma(|m|)$, which means that the entire summand depends on $m$ only thro
Proof of variance of stationary time series You are almost there! Now you just need to recognise that auto-correlation only depends on the lag, so you have $\gamma(m) = \gamma(|m|)$, which means that the entire summand depends on $m$ only through $|m|$ (i.e., it is symmetric around $m=0$). This allows you to split the sum into the middle element ($m=0$) and two lots of the symmetric part ($|m| = 1,...,n$), which gives you: $$\begin{equation} \begin{aligned} \text{Var}(\bar{X}) &= \frac{1}{n} \sum_{m=-n}^{n} \Big( 1-\frac{|m|}{n} \Big) \gamma(m) \\[6pt] &= \frac{1}{n} \sum_{m=-n}^{n} \Big( 1-\frac{|m|}{n} \Big) \gamma(|m|) \\[6pt] &= \frac{1}{n} \Bigg[ \gamma(0) +2\sum_{|m|=1}^n \Big( 1-\frac{|m|}{n} \Big) \gamma(|m|) \Bigg] \\[6pt] &= \frac{\gamma(0)}{n} + \frac{2}{n} \sum_{m=1}^n \Big( 1-\frac{m}{n} \Big) \gamma(m) \\[6pt] &= \frac{\gamma(0)}{n} + \frac{2}{n} \sum_{m=1}^{n-1} \Big( 1-\frac{m}{n} \Big) \gamma(m). \\[6pt] \end{aligned} \end{equation}$$ (The last step follows from the fact that $1-\tfrac{m}{n} = 0$ for $m=n$.) This method of splitting symmetric sums around their mid-point is a common trick used in these kinds of cases to simplify the sum by taking it only over positive arguments. It is a worthwhile trick to learn in general.
Proof of variance of stationary time series You are almost there! Now you just need to recognise that auto-correlation only depends on the lag, so you have $\gamma(m) = \gamma(|m|)$, which means that the entire summand depends on $m$ only thro
30,992
Proof of variance of stationary time series
first, fixing the definition of the problem, the index is $m$ instead of $u$, to make simpler I will use only the index $i$ and $j$. We want to prove that $\operatorname{Var}\left(\frac{X_1+X_2+...+X_n}{n}\right) = \dfrac{\gamma(0)}{n} + \dfrac{2}{n} \sum_{i=1}^{n-1} \left(1−\dfrac{i}{n}\right) \gamma(i).$ The begin is correct, $$\operatorname{Var}(\bar{X}) = \dfrac{1}{n^2} \sum_{i=1}^n\sum_{j=1}^n \operatorname{Cov}(X_i,X_j)$$ We can notice that $\operatorname{Cov}(X_i,X_j) = \operatorname{Cov}(X_j,X_i)$ and, from our assumptions about the problem, that $\operatorname{Cov}(X_i,X_i+h) = \operatorname{Cov}(X_i,X_i-h) = \gamma(h)$ for any $i$ and $h$. We can visualize the sum of covariances in $i$ and $j$ as follows $$\left| \begin{array}{ccccc} \operatorname{Cov}(1,1) & \operatorname{Cov}(1,2) & \cdots & \operatorname{Cov}(1,n-1) & \operatorname{Cov}(1,n)\\ \operatorname{Cov}(2,1) & \operatorname{Cov}(2,2) & \cdots & \operatorname{Cov}(2,n-1) & Cov(2,n)\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ \operatorname{Cov}(n-1,1)& \operatorname{Cov}(1,2) & \cdots & \operatorname{Cov}(n-1,n-1) &\operatorname{Cov}(n-1,n)\\ \operatorname{Cov}(n,1) & \operatorname{Cov}(n,2) & \cdots & \operatorname{Cov}(n,n-1) &\operatorname{Cov}(n,n) \end{array} \right|$$ What is equal to $$\left| \begin{array}{ccccc} \gamma(0) & \gamma(1) & \cdots & \gamma(n-1)\\ \gamma(1) & \gamma(0) & \cdots & \gamma(n-2)\\ \vdots & \vdots & \ddots & \vdots\\ \gamma(n-1)& \gamma(n-2) & \cdots & \gamma(0)\\ \end{array} \right|$$ To sum all the elements we can first sum the main diagonal, and as it is symmetric sum twice the other diagonals $$\sum_{i=1}^n\sum_{j=1}^n \operatorname{Cov}(X_i,X_j) = n \gamma(0) + 2\sum_{i=1}^{n-1}(n-i)\gamma(i)$$. Back to the main equation $$\operatorname{Var}\left(\frac{X_1+X_2+...+X_n}{n}\right) = \dfrac{\gamma(0)}{n}+\dfrac{2}{n^2}\sum_{i=1}^{n-1}(n-i)\gamma(i) = \dfrac{\gamma(0)}{n}+\dfrac{2}{n}\sum_{i=1}^{n-1}(1-\dfrac{i}{n})\gamma(i).$$
Proof of variance of stationary time series
first, fixing the definition of the problem, the index is $m$ instead of $u$, to make simpler I will use only the index $i$ and $j$. We want to prove that $\operatorname{Var}\left(\frac{X_1+X_2+...+X_
Proof of variance of stationary time series first, fixing the definition of the problem, the index is $m$ instead of $u$, to make simpler I will use only the index $i$ and $j$. We want to prove that $\operatorname{Var}\left(\frac{X_1+X_2+...+X_n}{n}\right) = \dfrac{\gamma(0)}{n} + \dfrac{2}{n} \sum_{i=1}^{n-1} \left(1−\dfrac{i}{n}\right) \gamma(i).$ The begin is correct, $$\operatorname{Var}(\bar{X}) = \dfrac{1}{n^2} \sum_{i=1}^n\sum_{j=1}^n \operatorname{Cov}(X_i,X_j)$$ We can notice that $\operatorname{Cov}(X_i,X_j) = \operatorname{Cov}(X_j,X_i)$ and, from our assumptions about the problem, that $\operatorname{Cov}(X_i,X_i+h) = \operatorname{Cov}(X_i,X_i-h) = \gamma(h)$ for any $i$ and $h$. We can visualize the sum of covariances in $i$ and $j$ as follows $$\left| \begin{array}{ccccc} \operatorname{Cov}(1,1) & \operatorname{Cov}(1,2) & \cdots & \operatorname{Cov}(1,n-1) & \operatorname{Cov}(1,n)\\ \operatorname{Cov}(2,1) & \operatorname{Cov}(2,2) & \cdots & \operatorname{Cov}(2,n-1) & Cov(2,n)\\ \vdots & \vdots & \ddots & \vdots & \vdots\\ \operatorname{Cov}(n-1,1)& \operatorname{Cov}(1,2) & \cdots & \operatorname{Cov}(n-1,n-1) &\operatorname{Cov}(n-1,n)\\ \operatorname{Cov}(n,1) & \operatorname{Cov}(n,2) & \cdots & \operatorname{Cov}(n,n-1) &\operatorname{Cov}(n,n) \end{array} \right|$$ What is equal to $$\left| \begin{array}{ccccc} \gamma(0) & \gamma(1) & \cdots & \gamma(n-1)\\ \gamma(1) & \gamma(0) & \cdots & \gamma(n-2)\\ \vdots & \vdots & \ddots & \vdots\\ \gamma(n-1)& \gamma(n-2) & \cdots & \gamma(0)\\ \end{array} \right|$$ To sum all the elements we can first sum the main diagonal, and as it is symmetric sum twice the other diagonals $$\sum_{i=1}^n\sum_{j=1}^n \operatorname{Cov}(X_i,X_j) = n \gamma(0) + 2\sum_{i=1}^{n-1}(n-i)\gamma(i)$$. Back to the main equation $$\operatorname{Var}\left(\frac{X_1+X_2+...+X_n}{n}\right) = \dfrac{\gamma(0)}{n}+\dfrac{2}{n^2}\sum_{i=1}^{n-1}(n-i)\gamma(i) = \dfrac{\gamma(0)}{n}+\dfrac{2}{n}\sum_{i=1}^{n-1}(1-\dfrac{i}{n})\gamma(i).$$
Proof of variance of stationary time series first, fixing the definition of the problem, the index is $m$ instead of $u$, to make simpler I will use only the index $i$ and $j$. We want to prove that $\operatorname{Var}\left(\frac{X_1+X_2+...+X_
30,993
Proof of variance of stationary time series
We have, $Var(\bar{X})=Var\left(\frac{\sum\limits_{i=1}^{n}{X_i}}{n}\right)=\frac{1}{n^2}Var\left(\sum\limits_{i=1}^{n}{X_i}\right)=\frac{1}{n^2}\left(\sum\limits_{i=1}^{n}{Var(X_i)}+2\underset{1\leq i<j\leq n}{\sum\sum}cov(X_i,X_j)\right)$ Also, by definition of the covariance function (corresponding to different lag values) of a weakly stationary time series, we have $\gamma(0)=cov(X_1,X_1)=cov(X_2,X_2)=\ldots=cov(X_n,X_n)$, i.e., $\begin{split} \gamma(0)&=&Var(X_1)&=&Var(X_2)&=&\ldots&=&Var(X_n)& \quad \text{there are $n$ terms} \\[-1pt] \gamma(1)&=&cov(X_1,X_2)&=&cov(X_2,X_3)&=&\ldots&=&cov(X_{n-1},X_n) & \quad\text{there are $(n-1)$ terms} \\ \gamma(2)&=&cov(X_1,X_3)&=&cov(X_2,X_4)&=&\ldots&=&cov(X_{n-2},X_n) & \quad\text{there are $(n-2)$ terms} \\ \ldots&&&&\ldots&&&&\ldots& \\ \gamma(n-2)&=&cov(X_1,X_{n-2})&=&cov(X_2,X_{n-1})&&&& & \quad\text{there are $2$ terms} \\ \gamma(n-1)&=&cov(X_1,X_{n-1})&&&&&& & \quad\text{there is $1$ term} \\ \end{split}$ Hence, we have, $\sum\limits_{i=1}^{n}{Var(X_i)}=n\gamma(0)$ and $\underset{i<j}{\sum\sum}cov(X_i,X_j)=(n-1)\gamma(1)+(n-2)\gamma(2)+\ldots+2\gamma(n-2)+\gamma(n-1)$ $\quad\quad\quad\quad\quad\quad\quad\quad=\sum\limits_{m=1}^{n-1}{(n-m)\gamma(m)}$ $\implies Var(\bar{X})=\frac{1}{n^2}\left(\sum\limits_{i=1}^{n}{Var(X_i)}+2\underset{1\leq i<j \leq n}{\sum\sum}cov(X_i,X_j)\right)$ $\quad\quad\quad\quad\quad\quad=\frac{1}{n^2}\left(n\gamma(0)+2\sum\limits_{m=1}^{n-1}{(n-m)\gamma(m)}\right)$ $\implies Var(\bar{X})=\frac{\gamma(0)}{n}+\frac{2}{n}\sum\limits_{m=1}^{n-1}{(1-\frac{m}{n})\gamma(m)}$
Proof of variance of stationary time series
We have, $Var(\bar{X})=Var\left(\frac{\sum\limits_{i=1}^{n}{X_i}}{n}\right)=\frac{1}{n^2}Var\left(\sum\limits_{i=1}^{n}{X_i}\right)=\frac{1}{n^2}\left(\sum\limits_{i=1}^{n}{Var(X_i)}+2\underset{1\leq
Proof of variance of stationary time series We have, $Var(\bar{X})=Var\left(\frac{\sum\limits_{i=1}^{n}{X_i}}{n}\right)=\frac{1}{n^2}Var\left(\sum\limits_{i=1}^{n}{X_i}\right)=\frac{1}{n^2}\left(\sum\limits_{i=1}^{n}{Var(X_i)}+2\underset{1\leq i<j\leq n}{\sum\sum}cov(X_i,X_j)\right)$ Also, by definition of the covariance function (corresponding to different lag values) of a weakly stationary time series, we have $\gamma(0)=cov(X_1,X_1)=cov(X_2,X_2)=\ldots=cov(X_n,X_n)$, i.e., $\begin{split} \gamma(0)&=&Var(X_1)&=&Var(X_2)&=&\ldots&=&Var(X_n)& \quad \text{there are $n$ terms} \\[-1pt] \gamma(1)&=&cov(X_1,X_2)&=&cov(X_2,X_3)&=&\ldots&=&cov(X_{n-1},X_n) & \quad\text{there are $(n-1)$ terms} \\ \gamma(2)&=&cov(X_1,X_3)&=&cov(X_2,X_4)&=&\ldots&=&cov(X_{n-2},X_n) & \quad\text{there are $(n-2)$ terms} \\ \ldots&&&&\ldots&&&&\ldots& \\ \gamma(n-2)&=&cov(X_1,X_{n-2})&=&cov(X_2,X_{n-1})&&&& & \quad\text{there are $2$ terms} \\ \gamma(n-1)&=&cov(X_1,X_{n-1})&&&&&& & \quad\text{there is $1$ term} \\ \end{split}$ Hence, we have, $\sum\limits_{i=1}^{n}{Var(X_i)}=n\gamma(0)$ and $\underset{i<j}{\sum\sum}cov(X_i,X_j)=(n-1)\gamma(1)+(n-2)\gamma(2)+\ldots+2\gamma(n-2)+\gamma(n-1)$ $\quad\quad\quad\quad\quad\quad\quad\quad=\sum\limits_{m=1}^{n-1}{(n-m)\gamma(m)}$ $\implies Var(\bar{X})=\frac{1}{n^2}\left(\sum\limits_{i=1}^{n}{Var(X_i)}+2\underset{1\leq i<j \leq n}{\sum\sum}cov(X_i,X_j)\right)$ $\quad\quad\quad\quad\quad\quad=\frac{1}{n^2}\left(n\gamma(0)+2\sum\limits_{m=1}^{n-1}{(n-m)\gamma(m)}\right)$ $\implies Var(\bar{X})=\frac{\gamma(0)}{n}+\frac{2}{n}\sum\limits_{m=1}^{n-1}{(1-\frac{m}{n})\gamma(m)}$
Proof of variance of stationary time series We have, $Var(\bar{X})=Var\left(\frac{\sum\limits_{i=1}^{n}{X_i}}{n}\right)=\frac{1}{n^2}Var\left(\sum\limits_{i=1}^{n}{X_i}\right)=\frac{1}{n^2}\left(\sum\limits_{i=1}^{n}{Var(X_i)}+2\underset{1\leq
30,994
Why KNN and SVM with a gaussian are non-parametric models?
kNN and SVM are not models, they are algorithms. Your model is what you are going to assume for you data. For example, : Gaussian with unknown mean but fixed variance. Gaussian with both unknown mean and variance. bounded. continuous. In the first two cases, the model is parametric because it is fully known except for a finite number of parameters (one -the mean- in the first case, and two -the mean and the variance- in the second case). In the third and fourth case, the model is nonparametric because you need an infinite number of parameters to specify the data (you can't describe the set of all bounded or continuous data with a finite number of parameters). In a parametric model, you have a lot more information and you can use very specific algorithms (in our example algorithms which only work for gaussian data). In the nonparametric setting, you have to use more general algorithms, since you assume way less on the data. Nothing prevents you of using general nonparametric algorithms for parametric models, they would work, but you can generally do better by using specific algorithms. kNN (even defined with gaussian weights) is a nonparametric algorithm devised to work for nonparametric models, i.e. very general models. SVM are more complicated to label. Basic SVM are linear classifiers, and as such parametric algorithms. Advanced SVM can work for nonlinear data, and if you have a SVM working for data not constrained to be in a family described by a finite number of parameters, then it is nonparametric. PS: Like Nicolas stated, all theses algorithms, parametric or nonparametric (which refer to the model they work for) also have parameters you have to chose (for kNN, $k$, the number of nearest neighbors).
Why KNN and SVM with a gaussian are non-parametric models?
kNN and SVM are not models, they are algorithms. Your model is what you are going to assume for you data. For example, : Gaussian with unknown mean but fixed variance. Gaussian with both unknown mea
Why KNN and SVM with a gaussian are non-parametric models? kNN and SVM are not models, they are algorithms. Your model is what you are going to assume for you data. For example, : Gaussian with unknown mean but fixed variance. Gaussian with both unknown mean and variance. bounded. continuous. In the first two cases, the model is parametric because it is fully known except for a finite number of parameters (one -the mean- in the first case, and two -the mean and the variance- in the second case). In the third and fourth case, the model is nonparametric because you need an infinite number of parameters to specify the data (you can't describe the set of all bounded or continuous data with a finite number of parameters). In a parametric model, you have a lot more information and you can use very specific algorithms (in our example algorithms which only work for gaussian data). In the nonparametric setting, you have to use more general algorithms, since you assume way less on the data. Nothing prevents you of using general nonparametric algorithms for parametric models, they would work, but you can generally do better by using specific algorithms. kNN (even defined with gaussian weights) is a nonparametric algorithm devised to work for nonparametric models, i.e. very general models. SVM are more complicated to label. Basic SVM are linear classifiers, and as such parametric algorithms. Advanced SVM can work for nonlinear data, and if you have a SVM working for data not constrained to be in a family described by a finite number of parameters, then it is nonparametric. PS: Like Nicolas stated, all theses algorithms, parametric or nonparametric (which refer to the model they work for) also have parameters you have to chose (for kNN, $k$, the number of nearest neighbors).
Why KNN and SVM with a gaussian are non-parametric models? kNN and SVM are not models, they are algorithms. Your model is what you are going to assume for you data. For example, : Gaussian with unknown mean but fixed variance. Gaussian with both unknown mea
30,995
Can we really sample from a Continuous distribution (Scipy function) and what does it mean?
In practice the functions that sample from continuous distributions at best sample only to some level of accuracy. For example, if we're sampling from a uniform on the unit interval, typically what happens is there's an algorithm that samples uniformly over some (very large) range of integers (say $0,1,...,m-1$) and these may be converted to numbers in $[0,1)$ by dividing by $m$. So you can see $n/m$ or $(n-1)/m$ or $(n+1)/m$ but not values in between. If you think of these discrete values as representing values within a range ($n/m$ in some sense "stands for" values in $[n/m, (n+1)/m)$ then there's a sense in which the sampled values could be regarded as standing for an interval of truly continuous values; while such considerations become somewhat more complex once you start transforming them, nevertheless in many situations the endpoints of the intervals can be tracked through such transformations and the process maintained as needed. Note that your professor's comment doesn't seem to be talking about what is usually done but rather what we could do. In that case whuber's comment at the linked post is relevant: One (inefficient) way is to generate each successive binary digit independently until the number is known sufficiently precisely for the calculations. One way to look at that is that we can (as before) regard any current representation as a proxy for an interval of values but we can generate as many additional digits as required when we need them. In that case, a given generated value has always only been partly generated; the process of generating to more precision may be undertaken as the precision is needed. In reality all our representations of continuous quantities (not just in random generation, but in any measurement) are limited in accuracy; normally this doesn't do any harm to our notions of continuous variables as a suitable model for what we are doing.
Can we really sample from a Continuous distribution (Scipy function) and what does it mean?
In practice the functions that sample from continuous distributions at best sample only to some level of accuracy. For example, if we're sampling from a uniform on the unit interval, typically what ha
Can we really sample from a Continuous distribution (Scipy function) and what does it mean? In practice the functions that sample from continuous distributions at best sample only to some level of accuracy. For example, if we're sampling from a uniform on the unit interval, typically what happens is there's an algorithm that samples uniformly over some (very large) range of integers (say $0,1,...,m-1$) and these may be converted to numbers in $[0,1)$ by dividing by $m$. So you can see $n/m$ or $(n-1)/m$ or $(n+1)/m$ but not values in between. If you think of these discrete values as representing values within a range ($n/m$ in some sense "stands for" values in $[n/m, (n+1)/m)$ then there's a sense in which the sampled values could be regarded as standing for an interval of truly continuous values; while such considerations become somewhat more complex once you start transforming them, nevertheless in many situations the endpoints of the intervals can be tracked through such transformations and the process maintained as needed. Note that your professor's comment doesn't seem to be talking about what is usually done but rather what we could do. In that case whuber's comment at the linked post is relevant: One (inefficient) way is to generate each successive binary digit independently until the number is known sufficiently precisely for the calculations. One way to look at that is that we can (as before) regard any current representation as a proxy for an interval of values but we can generate as many additional digits as required when we need them. In that case, a given generated value has always only been partly generated; the process of generating to more precision may be undertaken as the precision is needed. In reality all our representations of continuous quantities (not just in random generation, but in any measurement) are limited in accuracy; normally this doesn't do any harm to our notions of continuous variables as a suitable model for what we are doing.
Can we really sample from a Continuous distribution (Scipy function) and what does it mean? In practice the functions that sample from continuous distributions at best sample only to some level of accuracy. For example, if we're sampling from a uniform on the unit interval, typically what ha
30,996
Can we really sample from a Continuous distribution (Scipy function) and what does it mean?
You are correct that random number generators are really sampling from a discrete, granular distribution. Floating point numbers have only finite precision (8 bit, 16 bit, etc.) and computers can only generate floating point numbers within a specified precision. Generating a truly random complete real variable would require specifying an infinite amount of information, and isn't necessary -- finite precision results are, with care, more than precise enough.
Can we really sample from a Continuous distribution (Scipy function) and what does it mean?
You are correct that random number generators are really sampling from a discrete, granular distribution. Floating point numbers have only finite precision (8 bit, 16 bit, etc.) and computers can only
Can we really sample from a Continuous distribution (Scipy function) and what does it mean? You are correct that random number generators are really sampling from a discrete, granular distribution. Floating point numbers have only finite precision (8 bit, 16 bit, etc.) and computers can only generate floating point numbers within a specified precision. Generating a truly random complete real variable would require specifying an infinite amount of information, and isn't necessary -- finite precision results are, with care, more than precise enough.
Can we really sample from a Continuous distribution (Scipy function) and what does it mean? You are correct that random number generators are really sampling from a discrete, granular distribution. Floating point numbers have only finite precision (8 bit, 16 bit, etc.) and computers can only
30,997
What is the impact of doubling a sample size on a p-value
For the T test we have rules like "Doubling the sample size increases the test statistic by $\sqrt{2}$ ". This might make you think that there is a simple relationship between sample size and p-value. In fact the relationship between sample size and p-value depends on the relationship between sample size and the test statistic, and the relationship between test statistic and p-value. Those relationships will be different for every test. For the most simple case, the one sided Z test, we can see what this relationship is. Suppose a random variable $X$ has mean $\mu$ and variance $\sigma^2$. Uppose that we are testing if the mean of $X$ is significantly different from $\nu$. The test statistic $Z$ is $\frac{(\bar{x}-\nu)\sqrt{n}}{\sigma}$. The p value is equal to one minus the CDF of the $Z$ statistic (this assumes that the difference between means is positive, a similar argument works if the difference is negative). For the normal distribution the CDF is $\Phi(t)=0.5+0.5\cdot erf(\frac{x-\mu_t}{\sigma_t \sqrt{2}})$. Where erf(x) is the error function. Under the null hypothesis of equal means the $Z$ statistic has a mean $0$ and variance $1$. The actual distribution of $Z$ has a mean of $\frac{(\bar{x}-\nu)\sqrt{n}}{\sigma}$ and variance $1$. The effect size of the difference between the means is $\frac{(\bar{x}-\nu)}{\sigma}$. Call the effect size $b$, then the expected value of $Z$ is $b\sqrt{n}$. For $Z$ the CDF is $\Phi(z)=0.5+0.5\cdot erf(\frac{z}{\sqrt{2}})$. Where erf(x) is the error function. Of course the $Z$ statistic is a random variable, here we'll just look at the relationship between sample size and p-value for the expected value of $Z$. It follows that the CDF of the $Z$ statistic is $\Phi(z)=0.5+0.5\cdot erf(\frac{b\sqrt{n}}{\sqrt{2}})$ This is the relationship between the p value and sample size $p=0.5-0.5\cdot erf(\frac{b\sqrt{n}}{\sqrt{2}})$ The relationship varies according to the value of $n$. For very large $n$ we can use a series expansion to see the limiting behavior. According to wolfram alpha that is: $\lim_{n \to \infty}p = e^{-0.5b^2n} \left(\frac{1}{eb\sqrt{n}}+O\left(\frac{1}{(b\sqrt{n})^2} \right) \right)$ That is quite a quick decay towards 0. There is a big dependence on the effect size, of course if the difference between means is greater then the p value will shrink more quickly as your sampling improves. Again, remember that this is only for the Z and T test, it doesn't apply to other tests.
What is the impact of doubling a sample size on a p-value
For the T test we have rules like "Doubling the sample size increases the test statistic by $\sqrt{2}$ ". This might make you think that there is a simple relationship between sample size and p-value.
What is the impact of doubling a sample size on a p-value For the T test we have rules like "Doubling the sample size increases the test statistic by $\sqrt{2}$ ". This might make you think that there is a simple relationship between sample size and p-value. In fact the relationship between sample size and p-value depends on the relationship between sample size and the test statistic, and the relationship between test statistic and p-value. Those relationships will be different for every test. For the most simple case, the one sided Z test, we can see what this relationship is. Suppose a random variable $X$ has mean $\mu$ and variance $\sigma^2$. Uppose that we are testing if the mean of $X$ is significantly different from $\nu$. The test statistic $Z$ is $\frac{(\bar{x}-\nu)\sqrt{n}}{\sigma}$. The p value is equal to one minus the CDF of the $Z$ statistic (this assumes that the difference between means is positive, a similar argument works if the difference is negative). For the normal distribution the CDF is $\Phi(t)=0.5+0.5\cdot erf(\frac{x-\mu_t}{\sigma_t \sqrt{2}})$. Where erf(x) is the error function. Under the null hypothesis of equal means the $Z$ statistic has a mean $0$ and variance $1$. The actual distribution of $Z$ has a mean of $\frac{(\bar{x}-\nu)\sqrt{n}}{\sigma}$ and variance $1$. The effect size of the difference between the means is $\frac{(\bar{x}-\nu)}{\sigma}$. Call the effect size $b$, then the expected value of $Z$ is $b\sqrt{n}$. For $Z$ the CDF is $\Phi(z)=0.5+0.5\cdot erf(\frac{z}{\sqrt{2}})$. Where erf(x) is the error function. Of course the $Z$ statistic is a random variable, here we'll just look at the relationship between sample size and p-value for the expected value of $Z$. It follows that the CDF of the $Z$ statistic is $\Phi(z)=0.5+0.5\cdot erf(\frac{b\sqrt{n}}{\sqrt{2}})$ This is the relationship between the p value and sample size $p=0.5-0.5\cdot erf(\frac{b\sqrt{n}}{\sqrt{2}})$ The relationship varies according to the value of $n$. For very large $n$ we can use a series expansion to see the limiting behavior. According to wolfram alpha that is: $\lim_{n \to \infty}p = e^{-0.5b^2n} \left(\frac{1}{eb\sqrt{n}}+O\left(\frac{1}{(b\sqrt{n})^2} \right) \right)$ That is quite a quick decay towards 0. There is a big dependence on the effect size, of course if the difference between means is greater then the p value will shrink more quickly as your sampling improves. Again, remember that this is only for the Z and T test, it doesn't apply to other tests.
What is the impact of doubling a sample size on a p-value For the T test we have rules like "Doubling the sample size increases the test statistic by $\sqrt{2}$ ". This might make you think that there is a simple relationship between sample size and p-value.
30,998
What is the impact of doubling a sample size on a p-value
Let us first investigate the effect on the t-value. We can then immediately infer the effect on p-value. This is perhaps best illustrated by a well-chosen simulation example which illustrates the most salient features. Since we're looking at $H_0$ being false (and we're essentially considering properties related to power) it makes sense to focus on a one-tailed test (in the "correct" direction) since looking at the wrong tail won't see much action and won't tell us much of interest. So here we have a situation (at n=100) where the effect is large enough that the statistic is sometimes significant. We then add to that first sample a second drawing from the same continuous distribution of x-values (here uniform but it's not critical to the observed effect) of the same size as the first, leading to a doubling of the sample size, but entirely including the first sample. What we observe is not that the p-value goes down, only that it tends to go down (more points lie above the diagonal line than below it); we can see that the variation in t-values reduces, so there are fewer in the region of 0. Many p-values go up. Quite a number of samples that were insignificant became significant when we added more data, but some that were significant became insignificant. [Here we're looking at the t-statistic for the slope coefficient in a simple regression, though qualitatively the issues are similar more broadly.] A plot of p-values instead of t-values conveys essentially the same information. Indeed if you put tick-markings at the right intervals on the axes above, you could label them with p-values instead ... but the top (and right) would show low p-values and the bottom (/left) would be labelled with larger p-values. [Actually plotting the p-values just squashes everything up into the corner, and it's less clear what's going on.]
What is the impact of doubling a sample size on a p-value
Let us first investigate the effect on the t-value. We can then immediately infer the effect on p-value. This is perhaps best illustrated by a well-chosen simulation example which illustrates the most
What is the impact of doubling a sample size on a p-value Let us first investigate the effect on the t-value. We can then immediately infer the effect on p-value. This is perhaps best illustrated by a well-chosen simulation example which illustrates the most salient features. Since we're looking at $H_0$ being false (and we're essentially considering properties related to power) it makes sense to focus on a one-tailed test (in the "correct" direction) since looking at the wrong tail won't see much action and won't tell us much of interest. So here we have a situation (at n=100) where the effect is large enough that the statistic is sometimes significant. We then add to that first sample a second drawing from the same continuous distribution of x-values (here uniform but it's not critical to the observed effect) of the same size as the first, leading to a doubling of the sample size, but entirely including the first sample. What we observe is not that the p-value goes down, only that it tends to go down (more points lie above the diagonal line than below it); we can see that the variation in t-values reduces, so there are fewer in the region of 0. Many p-values go up. Quite a number of samples that were insignificant became significant when we added more data, but some that were significant became insignificant. [Here we're looking at the t-statistic for the slope coefficient in a simple regression, though qualitatively the issues are similar more broadly.] A plot of p-values instead of t-values conveys essentially the same information. Indeed if you put tick-markings at the right intervals on the axes above, you could label them with p-values instead ... but the top (and right) would show low p-values and the bottom (/left) would be labelled with larger p-values. [Actually plotting the p-values just squashes everything up into the corner, and it's less clear what's going on.]
What is the impact of doubling a sample size on a p-value Let us first investigate the effect on the t-value. We can then immediately infer the effect on p-value. This is perhaps best illustrated by a well-chosen simulation example which illustrates the most
30,999
What is the impact of doubling a sample size on a p-value
In general, when the respective null is false, expect decay of the p-values as in the figure below, where I report average p-values from little simulation study for multiples of samples of size n=25 ranging from bb*n=25to bb*n=29*25 for a simple linear regression coefficient equal to 0.1 and error standard deviation of $\sigma_u=0.5$. Since the p-values are bounded from below by zero, the decay must ultimately flatten out. The 90% confidence interval (shaded blue area) indicates that, moreover, the variability of the p-values also decreases with sample size. Evidently, when either $\sigma_u$ is smaller or $n$ larger, the p-values will be close to zero faster when increasing bb, so that the appearance of the plot will be flatter. Code: reps <- 5000 B <- seq(1,30,by=2) n <- 25 sigma.u <- .5 pvalues <- matrix(NA,reps,length(B)) for (bb in 1:length(B)){ for (i in 1:reps){ x <- rnorm(B[bb]*n) y <- .1*x + rnorm(B[bb]*n,sd=sigma.u) pvalues[i,bb] <- summary(lm(y~x))$coefficients[2,4] } } plot(B,colMeans(pvalues),type="l", lwd=2, col="purple", ylim=c(0,.9)) ConfidenceInterval <- apply(pvalues, 2, quantile, probs = c(.1,.9)) x.ax <- c(B,rev(B)) y.ax <- c(ConfidenceInterval[1,], rev(ConfidenceInterval[2,])) polygon(x.ax,y.ax, col=alpha("blue", alpha = .2), border=NA)
What is the impact of doubling a sample size on a p-value
In general, when the respective null is false, expect decay of the p-values as in the figure below, where I report average p-values from little simulation study for multiples of samples of size n=25 r
What is the impact of doubling a sample size on a p-value In general, when the respective null is false, expect decay of the p-values as in the figure below, where I report average p-values from little simulation study for multiples of samples of size n=25 ranging from bb*n=25to bb*n=29*25 for a simple linear regression coefficient equal to 0.1 and error standard deviation of $\sigma_u=0.5$. Since the p-values are bounded from below by zero, the decay must ultimately flatten out. The 90% confidence interval (shaded blue area) indicates that, moreover, the variability of the p-values also decreases with sample size. Evidently, when either $\sigma_u$ is smaller or $n$ larger, the p-values will be close to zero faster when increasing bb, so that the appearance of the plot will be flatter. Code: reps <- 5000 B <- seq(1,30,by=2) n <- 25 sigma.u <- .5 pvalues <- matrix(NA,reps,length(B)) for (bb in 1:length(B)){ for (i in 1:reps){ x <- rnorm(B[bb]*n) y <- .1*x + rnorm(B[bb]*n,sd=sigma.u) pvalues[i,bb] <- summary(lm(y~x))$coefficients[2,4] } } plot(B,colMeans(pvalues),type="l", lwd=2, col="purple", ylim=c(0,.9)) ConfidenceInterval <- apply(pvalues, 2, quantile, probs = c(.1,.9)) x.ax <- c(B,rev(B)) y.ax <- c(ConfidenceInterval[1,], rev(ConfidenceInterval[2,])) polygon(x.ax,y.ax, col=alpha("blue", alpha = .2), border=NA)
What is the impact of doubling a sample size on a p-value In general, when the respective null is false, expect decay of the p-values as in the figure below, where I report average p-values from little simulation study for multiples of samples of size n=25 r
31,000
Regression definition
Regression is far broader in purpose and scope than classification or machine learning (however the latter might be understood). There is, however, much overlap. Relationships Relationships analyzed by regression may consist of Association Dependence Causation Classification provides information about the first two, but is silent about causation. Both regression and machine learning have been used--sometimes successfully, often problematically--to draw conclusions about causation. Purposes of Regression To get a summary of multivariate data. To set aside the effect of a variable that might confuse the issue. Contribute to attempts at causal analysis. Measure the size of an effect. Try to discover a mathematical or empirical law. Prediction. Exclusion: getting $x$ "out of the way" when we want to study the relationship between two other variables that might be affected by $x$. (After Mosteller & Tukey, Data Analysis and Regression, Chapter 12B.) Classification achieves almost none of these purposes. In limited ways it might provide some kind of summary (1) and help with discovery (5). Machine learning aims at prediction (6) almost exclusively. Most techniques of machine learning, ranging from random forests through neural networks to support vector models, are opaque to the understanding: they specifically do not aim to summarize data (1), remove the effects of confounding variables (2 and 7), or help us discover regularities that can be embodied in an empirical law (5). This post is a slight expansion of an introductory presentation I made recently for a semester course in regression. Many more materials on the aims and practice of regression are available there.
Regression definition
Regression is far broader in purpose and scope than classification or machine learning (however the latter might be understood). There is, however, much overlap. Relationships Relationships analyzed
Regression definition Regression is far broader in purpose and scope than classification or machine learning (however the latter might be understood). There is, however, much overlap. Relationships Relationships analyzed by regression may consist of Association Dependence Causation Classification provides information about the first two, but is silent about causation. Both regression and machine learning have been used--sometimes successfully, often problematically--to draw conclusions about causation. Purposes of Regression To get a summary of multivariate data. To set aside the effect of a variable that might confuse the issue. Contribute to attempts at causal analysis. Measure the size of an effect. Try to discover a mathematical or empirical law. Prediction. Exclusion: getting $x$ "out of the way" when we want to study the relationship between two other variables that might be affected by $x$. (After Mosteller & Tukey, Data Analysis and Regression, Chapter 12B.) Classification achieves almost none of these purposes. In limited ways it might provide some kind of summary (1) and help with discovery (5). Machine learning aims at prediction (6) almost exclusively. Most techniques of machine learning, ranging from random forests through neural networks to support vector models, are opaque to the understanding: they specifically do not aim to summarize data (1), remove the effects of confounding variables (2 and 7), or help us discover regularities that can be embodied in an empirical law (5). This post is a slight expansion of an introductory presentation I made recently for a semester course in regression. Many more materials on the aims and practice of regression are available there.
Regression definition Regression is far broader in purpose and scope than classification or machine learning (however the latter might be understood). There is, however, much overlap. Relationships Relationships analyzed