Low $R^2$ value in social science or education research?
A paper by Abelson (1985) titled "A variance explanation paradox: When a little is a lot", published in Psychological Bulletin, addresses (part of) this issue. In particular, Abelson shows that the proportion of variance shared between a dichotomous and a continuous variable can be surprisingly small, even when intuition would dictate a very large $R^2$ (he uses the example of whether a baseball batter would hit a ball or not, as a function of the batter's batting average--yielding a whopping $R^2 < .001$). Abelson goes on to explain that even such a tiny $R^2$ can be meaningful, as long as the effect under investigation can make itself felt over time.

P.S.: I used this paper a few months ago to respond to a reviewer who was unimpressed with our low $R^2$s, and it hit the mark--our paper is now in press :)

Reference: Abelson, R. P. (1985). A variance explanation paradox: When a little is a lot. Psychological Bulletin, 97, 129-133.
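To get a feel for the paradox, here is a minimal simulation sketch; the batting-average spread is a hypothetical choice, not Abelson's actual figures:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200_000
    skill = rng.normal(0.25, 0.01, n)   # hypothetical batting averages, sd = 0.01
    hit = rng.binomial(1, skill)        # outcome of a single at-bat
    r2 = np.corrcoef(skill, hit)[0, 1] ** 2
    print(r2)                           # well under 0.1%, in the spirit of Abelson's point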
Low $R^2$ value in social science or education research?
An arm-waving argument that nevertheless has much force works backwards. What would perfect prediction imply? For example, it would imply that we can predict students' performance exactly by just knowing their age, sex, race, class, etc. Yet we know that is absurd; it contradicts much else of what we know in social science, not to say everyday life. Moreover, although this is a different issue: many of us would not want to live in such a world.
Low $R^2$ value in social science or education research?
I find your question a bit vague; it probably depends on what you want to do in social science or education research. But more generally, like every indicator, $R^2$ is good for checking what it is designed to check and bad for the rest. Precisely, $R^2$ can be defined as $R^2 = \frac{SSE}{SST} = 1 - \frac{SSR}{SST}$, so it measures how much of the data you can explain with your model - how well the data fit a statistical model.

The domain where it matters most is prediction: if you want to predict your outcome, it is necessary that your model explains nearly all of what is happening in the data. On the contrary, if you are interested - as is often the case - in the influence of one variable/parameter, you do not care at all about the $R^2$; all you care about is that your effects are, for instance, significant, with the required hypotheses verified.

I have no precise reference in mind, but any introductory econometrics textbook will have a chapter or section on it (e.g. Mostly Harmless Econometrics or Wooldridge's Introductory Econometrics: A Modern Approach).
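As a quick sketch of that distinction (simulated data; statsmodels assumed available), here is a regression where the $R^2$ is tiny yet the effect of interest is clearly significant:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 5_000
    x = rng.normal(size=n)
    y = 0.1 * x + rng.normal(size=n)   # small true effect, lots of noise

    fit = sm.OLS(y, sm.add_constant(x)).fit()
    print(fit.rsquared)    # around 0.01: a "poor" fit
    print(fit.pvalues[1])  # yet the slope is highly significant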
Low $R^2$ value in social science or education research?
Abelson's point could be summarised: what is improbable becomes probable given sufficiently many repetitions. Evolution is built on this principle: it is improbable that a mutation would be an advantage to the mutant, but given sufficiently many mutations, it is likely that a few are advantageous. By means of selection and progeny, the improbable afterwards becomes probable in the population. In both cases, there is a selection mechanism that makes success decisive and failure not a disaster (for the species, at least).

Jesper Juul's book about gaming, "The Art of Failure", adds another dimension to Abelson's considerations. Juul's point is that it is not fascinating to play games where you never lose. Rather, there must be a balance between skill and the frequency of failures/successes before it becomes attractive to play and improve your performance. Gaming and training ensure that failure is not a disaster; then the selection mechanism is effective, and low $R^2$ values are no problem - they may even be preferable. Inversely, when failure is a disaster, high $R^2$ values are very important.

More generally, $R^2$ values are important where the event is a game changer. Moreover, game-changing events often cannot be reduced to a binary failure/success: the possible outcomes are multiple and have multiple effects. In that case, the outcome has historical/biographical salience. When events are historical and have never happened before, it is basically impossible to estimate $R^2$, even though some analytical description may reduce randomness, because history to some extent may resemble itself. In short, you may experience the combination of small $R^2$ and game-changing events. ... Well, that is life, sometimes ;-)
Which kind of statistical exact test should I use?
Peter Flom's answer makes a good point, but strictly speaking, the situation is more complicated than that. Treatment 2 is administered conditionally on the failure of treatment 1. Suppose the probability of success with treatment 1 is $p_1$ (respectively $p_2$ for treatment 2). Each patient who receives treatment 2 and is cured thereby contributes $(1-p_1)p_2$ to the likelihood. With 4 patients, that gives $(1-p_1)^4 p_2^4$. You could then maximize the likelihood for $p_1$ and $p_2$ and compare, via a likelihood ratio test, to the likelihood when $p_1=p_2$. However, in your case, since no one was cured by the old treatment, the MLEs for $p_1$ and $p_2$ are 0 and 1 respectively, and that test will break down.

But there is another way: assuming $p_1=p_2$, the MLE for the null model is $p=0.5$. The probability of getting what you got, or worse (only there isn't any worse), is then $0.5^8 \approx 0.004$.

However, 4 patients is pretty small ... and from a design perspective, I would like to see a randomized trial, or at least a cross-over. I don't know what the illness is, but if it were the common cold, that's exactly the result you would get if you applied two bogus treatments. The first would fail, and the patient would get better in time.
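For concreteness, a small sketch of the arithmetic above (nothing more than the formulas already stated):

    # Likelihood for 4 patients who all failed treatment 1 and were then cured
    # by treatment 2: L(p1, p2) = (1 - p1)^4 * p2^4, maximized at p1 = 0, p2 = 1.
    # Under the null p1 = p2 = p, each patient contributes (1 - p) * p.
    p = 0.5  # MLE under the null
    print(((1 - p) * p) ** 4)  # 0.5^8 = 0.00390625, i.e. about 0.004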
Which kind of statistical exact test should I use?
You could use some sort of permutation test, but with only 4 people there are only $2^4 = 16$ possible combinations. Yours is the most extreme one, so its p-value will be $\frac{1}{16} = 0.0625$. (This is a case where one can do the permutation test in one's head! That doesn't happen too often.)
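If you want to verify the head-count by brute force, here is a sketch with a hypothetical encoding (1 means the patient does better under the new treatment):

    from itertools import product

    # Under the null, each patient is equally likely to favor either treatment.
    outcomes = list(product([0, 1], repeat=4))     # all 2^4 = 16 assignments
    extreme = sum(sum(o) == 4 for o in outcomes)   # all 4 favor the new treatment
    print(extreme / len(outcomes))                 # 1/16 = 0.0625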
Which kind of statistical exact test should I use?
It is impossible to even begin to interpret these data without knowing the natural history of the disease. As Placidia mentioned, if the disease is self-limiting (like a common cold), then these results are meaningless. The second treatment, whatever it is (even placebo), will seem to be effective. In contrast, if the disease is relentlessly progressive, these are impressive results that encourage you to set up a proper study. It seems to me that this kind of study is worth doing to get a sense of whether the new drug works, as a prelude to a proper controlled study. I doubt that any kind of statistical analysis will prove useful.
Chi-squared confidence interval for variance
Because the chi-squared distribution is skewed, the sample variance is not generally at the center of a 95% CI for the variance (for normal data). You are correct to say that you can often get a narrower interval by taking something like probability 2% from one tail and 3% from the other, than by taking 2.5% from each tail.

For practical purposes, the narrowest 95% interval may put almost all of the 5% probability in one tail, thus becoming nearly a one-sided interval. This may or may not be useful. Thus, it has become more or less standard to use probability-symmetric intervals in general practice. If you are not showing a probability-symmetric interval, it is a good idea to report that you are not, and to explain why.

Example: Consider a normal sample of size $n=20$ with variance $\sigma^2 = 25.$

    set.seed(2022)
    x = rnorm(20, 50, 5)
    v = var(x);  v
    [1] 25.01484

Seven 2-sided 95% CIs for $\sigma^2$ and their widths:

    CI.1 = 19*v/qchisq(c(.97, .02), 19)
    CI.1;  diff(CI.1)
    [1] 14.77971 55.47799
    [1] 40.69828

    CI.2 = 19*v/qchisq(c(.975, .025), 19)
    CI.2;  diff(CI.2)
    [1] 14.46722 53.36339
    [1] 38.89617              # probability-symmetric

    CI.3 = 19*v/qchisq(c(.98, .03), 19)
    CI.3;  diff(CI.3)
    [1] 14.10859 51.65860
    [1] 37.55002

    CI.4 = 19*v/qchisq(c(.99, .04), 19)
    CI.4;  diff(CI.4)
    [1] 13.13265 49.00681
    [1] 35.87417

    CI.5 = 19*v/qchisq(c(.995, .045), 19)
    CI.5;  diff(CI.5)
    [1] 12.31867 47.93333
    [1] 35.61466              # shortest on this list

    CI.6 = 19*v/qchisq(c(.999, .049), 19)
    CI.6;  diff(CI.6)
    [1] 10.84618 47.16119
    [1] 36.31501              # longer than above

    CI.7 = 19*v/qchisq(c(.99999, .04999), 19)
    CI.7;  diff(CI.7)
    [1] 8.284141 46.980289
    [1] 38.69615              # 'almost' one sided

Note: The relevant one-sided 95% CI would give the upper bound $46.97848.$ Depending on the application, that might be exactly what you want.
Chi-squared confidence interval for variance
For univariate continuous asymmetric distributions the highest density region (HDR) can be found by solving a constrained optimisation problem for the boundary points. You are correct that this involves placing non-equal weight in the tails. You can find a detailed analysis of this problem in O'Neill (2021), including a statement of the optimisation problem at issue and its solution. This paper also goes through the problem of finding the optimal confidence interval for the variance.

To save you from reinventing the wheel, it is worth noting that HDRs for all standard univariate distributions are available in the stat.extend package in R. The available families include the chi-squared distribution, the gamma distribution, and the inverse gamma distribution. These can be used to manually compute the optimal confidence interval. Alternatively, there are also direct functions for optimal confidence intervals, including the optimal confidence interval for the variance. In the code below we use the CONF.var function to compute the optimal 95% confidence interval for some mock data.

    #Load library
    library(stat.extend)

    #Create some mock data (same data as used by BruceET)
    set.seed(2022)
    x = rnorm(20, 50, 5)

    #Compute optimal confidence interval
    #Assumes a mesokurtic distribution (kurt = 3)
    CONF.var(x, alpha = 0.05, kurt = 3)

            Confidence Interval (CI)

    95.00% CI for variance parameter for infinite population
    Interval uses 20 data points from data x with sample variance = 25.0148
    and assumed kurtosis = 3.0000
    Computed using nlm optimisation with 8 iterations (code = 1)

    [12.4006846357447, 48.0126609150707]
Chi-squared confidence interval for variance
$$ \Pr(a<\chi^2_k<b) = 0.9 \tag 1 $$ One way to choose $a$ and $b$ is to choose them so that the values of the chi-square density function at those two points are equal to each other, and at the same time so that line $(1)$ above is true. That does give you a shorter confidence interval, but it is numerically somewhat complicated to implement, and probably the purpose in the course you're taking is just to show that it is possible to get a confidence interval for $\sigma^2$ by "inverting" the "pivotal quantity" $(n-1)S^2/\sigma^2.$
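For the numerically inclined, here is a sketch of that computation (SciPy assumed; $k = 19$ is an arbitrary illustrative choice), solving simultaneously for equal density values at the endpoints and 90% coverage:

    import numpy as np
    from scipy import stats, optimize

    k = 19  # degrees of freedom, arbitrary for illustration

    def equations(p):
        a, b = p
        return [stats.chi2.pdf(a, k) - stats.chi2.pdf(b, k),          # equal densities
                stats.chi2.cdf(b, k) - stats.chi2.cdf(a, k) - 0.90]   # 90% coverage

    a0, b0 = stats.chi2.ppf([0.05, 0.95], k)   # equal-tail starting values
    a, b = optimize.fsolve(equations, [a0, b0])
    print(a, b)   # unequal tail probabilities, equal density values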
Chi-squared confidence interval for variance
One answer is: why not use equal probabilities for both tails? If we do not, we produce confidence intervals that are asymmetric in terms of tail probability. Let us take an extreme example: to make a 95% confidence interval, we would usually put 2.5% in the lower tail and 2.5% in the upper tail. Now suppose instead we choose a 0% left tail and a 5% upper tail of a heavy right-tailed distribution. What is that? It is a 5% one-tailed (right) answer to the question of what the probability is of an $X_i \geq$ the confidence interval's upper bound.

So, why balance the probabilities? So that the probability of an answer lower than the confidence interval is the same as the probability of an answer higher than the confidence interval, which then gives us a balanced, two-tailed answer: the probability of an $X_i$ falling outside the confidence interval.

It is not unreasonable to ask if there are situations in which one wants unequal probabilities for the tails. Here is an arbitrary example for which the probabilities are not the final answers sought. Suppose that the x-axis measure is the length of pipes that we have produced by cutting very long pipes in a factory, and further suppose that if a pipe is too short (i.e., less than our hypothetical confidence interval lower bound) we must discard it, and that this costs twice as much as a pipe that is too long, which can be sent back for trimming to length. In that case, we might want a left tail that is half as probable as the right tail in order to balance the cost of having pipes of the wrong length.
How does PCA behave when there is no correlation in the dataset?
If you have no observed correlation, then your covariance matrix is diagonal, and PCA diagonalizes a matrix that is already diagonal (so it does nothing). If you have no population correlation but observe small sample correlations due to sampling variability, then PCA is diagonalizing a covariance matrix that is nearly diagonal, and the resulting set of features will differ only minimally from the original ones.
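A quick sketch of the first case (simulated uncorrelated data with distinct variances; scikit-learn assumed):

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    # Uncorrelated features with distinct variances 9, 4, 1
    X = rng.normal(size=(10_000, 3)) * np.array([3.0, 2.0, 1.0])

    pca = PCA().fit(X)
    print(np.round(pca.components_, 2))  # rows are (nearly) +/- the original axes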
How does PCA behave when there is no correlation in the dataset?
The components are the eigenvectors of the covariance matrix. If the covariance matrix is diagonal, then the features are already eigenvectors, so PCA will generally return the original features (up to scaling), ordered by decreasing variance. If you have a degenerate covariance matrix where two or more features have the same variance, however, a poorly designed algorithm that returns linear combinations of those features would technically satisfy the definition of PCA as generally given.
How does PCA behave when there is no correlation in the dataset?
If the true underlying covariance matrix is the identity:

- The leading eigenvectors of the sample correlation matrix will point in random directions, rather than directions that are informative about the nature of the data.
- The largest eigenvalues of the sample correlation matrix will still be larger than the smallest eigenvalues, by definition, and this might mislead you into thinking there's some signal.

If you are afraid this is happening to you, you can try to verify that the eigenpairs you use exceed the upper bound expected from iid data; a sketch of such a check follows below. This is governed by the Marchenko-Pastur distribution (wiki). If you want to see an example, the M-P upper bound is used for principal component selection by Aviv Regev and coauthors in their analysis of gene activity during zebrafish embryogenesis (Science paper). M-P only works for data with mean 0, variance 1. There might be some similar theory for other situations; I'm not sure.
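Here is a rough sketch of that check on pure-noise data, using the M-P upper edge $(1+\sqrt{p/n})^2$ (the sample sizes are arbitrary illustrative choices):

    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 500, 100
    X = rng.standard_normal((n, p))            # iid noise: true covariance = I
    evals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))
    mp_upper = (1 + np.sqrt(p / n)) ** 2       # Marchenko-Pastur upper edge
    print(evals.max(), mp_upper)               # largest eigenvalue sits near/below the edge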
How does PCA behave when there is no correlation in the dataset?
It depends on the true covariance structure of the population. If multiple features have the same population variance, sampling variance can actually mix up the observed features arbitrarily, whereas if all features have different population variances this cannot happen. Let me show this through a derivation.

Assume your true population covariance matrix is $A$ and your observed one is $A + \epsilon B$, where $\epsilon$ is a small positive number. Basically, you can think of $B$ as the direction in which the covariance matrix is perturbed (perturbation meaning 'error' due to sampling), and $\epsilon$ as the magnitude of that perturbation. Remember that PCA is basically looking at the eigenvectors of the covariance matrix.

If the true covariance is $A = aI$, the eigenvectors of $A + \epsilon B$ are just the eigenvectors $v$ of $B$. In other words, running PCA on your observed data will return features determined entirely by your sampling variation! The reason this happens is that PCA does not have a unique solution for your true covariance matrix in the first place. That is, whenever you try to diagonalize a matrix with duplicate eigenvalues, you run into the issue that there are multiple valid ways to pick unit eigenvectors (because you have eigenspaces of dimension greater than one). Thus even if $A$ is not of the form $aI$ but has duplicate eigenvalues, there will be some set of eigenvectors of $A + \epsilon B$ which deviate from the original features in a way dictated entirely by the eigenvectors of $B$.

Fortunately, in reality it is rare to find variables with exactly the same population variance. In this case, we can show that the features are robust against sampling variation. This is actually true in general (i.e. regardless of the assumption that your population variables are uncorrelated). To see this, we can just approximate the eigenvectors of $(A + \epsilon B)$ under some mild assumptions. Basically we want to show that they are just a perturbation of the eigenvectors of $A$ by a term that scales roughly linearly with $\epsilon$. The first assumption is of course that all eigenvalues of $A$ are unique. Second, we need to assume that the eigenspaces of $A$ and $B$ are disjoint, meaning that they have different eigenvectors. Again we can just appeal to reality -- a random matrix will almost never have the same eigenvectors as a fixed matrix (unless that fixed matrix is a multiple of $I$).

A warning that this "proof" is both very long and also not exactly valid. The key weakness is where I assume that the solution is analytic, which is sort of what we actually want to prove. Regardless, hopefully it will provide some insight.

[start of proof]

Let $u$ denote the $i^{th}$ eigenvector of $A$ for some $i$, with eigenvalue $\alpha$. We can write the corresponding eigenvector of $(A + \epsilon B)$ as $(u+v)$ for some $v$, by taking an eigenvector of the matrix and letting $v$ be that eigenvector minus $u$. Now, what is $(A + \epsilon B)(u+v)$? It should be $\lambda (u + v)$ for some $\lambda$. For simplicity, we can rewrite this as $(\alpha + \epsilon \gamma)(u+v)$ for $\gamma = \epsilon^{-1}(\lambda - \alpha)$. This makes sense because the eigenvalues are a continuous function of the matrix entries, which can be proven by using that the determinant is a continuous function (since it is a polynomial) and then using that the eigenvalues are defined via the determinant.
On the other hand, we also have
$$ (A + \epsilon B)(u+v) = Au + Av + \epsilon B (u+v) = \alpha u + Av + \epsilon B (u+v). $$
Thus
\begin{align*} (A + \epsilon B)(u+v) &= (\alpha + \epsilon \gamma)(u+v) = \alpha u + \epsilon \gamma u + \alpha v + \epsilon \gamma v \\ Av + \epsilon Bu + \epsilon Bv &= \epsilon \gamma u + (\alpha + \epsilon \gamma)v \\ \epsilon B u + (A + \epsilon B)v &= \epsilon \gamma u + (\alpha + \epsilon \gamma)v \\ \epsilon(B - \gamma I)u + ((A - \alpha I) + \epsilon (B - \gamma I))v &= 0 \end{align*}

Now, there are two possible ways this equation could hold. One involves having $(A - \alpha I)v = 0$. However, this would mean that $v$ is an eigenvector of $A$ with eigenvalue $\alpha$, and thus so is $u+v$. The problem with this is that it would imply that $(u+v)$ is an eigenvector of $B$ as well, and by assumption $B$ cannot share eigenvectors with $A$. Thus certainly $(A - \alpha I)v$ is nonzero.

However, looking at the equation, $(A - \alpha I)v$ is the only term that does not appear proportional to $\epsilon$. Why is this interesting? Because if it did not depend on $\epsilon$ at all, the equation could not have a solution -- since $\epsilon$ is essentially arbitrary, taking the limit as $\epsilon \to 0$ would create an inconsistent equation. The fix is to recognize that $v$ depends on $\epsilon$, and really it must be at least proportional to it. More precisely, it should be
$$ v = w_{0} + \epsilon w_{1} + \epsilon^{2} w_{2} + (\mathrm{higher \ order \ terms}). $$
I am making a bold assumption that $v$ is essentially analytic in $\epsilon$, but since we are basically solving a polynomial equation ($\gamma$ is also analytic in $\epsilon$), it seems reasonable. This is where the approximation comes in anyway. Note that $w_{0}$ must be zero, since as $\epsilon \to 0$ we must have $u+v \to u$. Basically, if $\epsilon$ is quite small, then $v \approx \epsilon w$ (writing $w = w_1$). Furthermore,
\begin{align*} \epsilon(B - \gamma I)u + ((A - \alpha I) + \epsilon (B - \gamma I))v &= \epsilon(B - \gamma I)u + ((A - \alpha I) + \epsilon (B - \gamma I))\epsilon w \\ &= \epsilon(B - \gamma I)u + \epsilon (A - \alpha I) w + \epsilon^{2} (B - \gamma I) w \\ &\approx \epsilon (B - \gamma I)u + \epsilon (A - \alpha I) w \end{align*}
because $\epsilon^{2} \approx 0$. Therefore, dividing out by $\epsilon$, in the end we are just solving
$$ (B - \gamma I)u + (A - \alpha I) w = 0 $$
or really
$$ (B - \gamma I)u = -(A - \alpha I) w. $$

Now, $-(A - \alpha I) w$ is some unknown vector in the column space of $(A - \alpha I)$. Knowing this allows us to solve for $\gamma$. Once we have done this, the term $(B - \gamma I)u$ becomes a known vector and we are just solving a linear equation. The value of $\gamma$ is unique because the column space of $(A - \alpha I)$ has smaller dimension than the whole space. In other words, it is 'difficult' to get the image of $u$ exactly into the subspace, so there is only one way to do it. However, the solution for $w$ is not unique, in the sense that $(\alpha I - A)w = y$ does not have a unique solution (with $y = (B - \gamma I)u$), the reason being that $(\alpha I - A)$ has a nontrivial null space, in particular containing the eigenvectors of $A$ with eigenvalue $\alpha$. However, we can pick the vector $w$ solving our linear equation which minimally violates the equation we actually wanted to solve. In other words, we can pick it so that the norm of $(B - \gamma I)w$ is minimal. This choice is unique.

[end of proof]

Ok, so what have we actually shown with all of this?
Basically, for small $\epsilon$ it is possible to get a unique approximation to the eigenvectors of $(A + \epsilon B)$ that has minimal 'error'. These eigenvectors are perturbed from the eigenvectors of $A$ by a small amount which is proportional to $\epsilon$. Therefore, so long as the features of the true population would be uniquely chosen by PCA (i.e. they have distinct population variances), the features of the observed data also can be uniquely chosen by PCA and they are perturbed from the true features by an amount roughly proportional to the size of the sampling error (assuming the sampling error is small!).
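A small numerical sketch of this conclusion (a hypothetical diagonal $A$ with distinct eigenvalues and a random symmetric $B$): the deviation of the leading eigenvector shrinks roughly linearly with $\epsilon$.

    import numpy as np

    rng = np.random.default_rng(1)
    p = 5
    A = np.diag([5.0, 4.0, 3.0, 2.0, 1.0])   # distinct population variances
    B = rng.standard_normal((p, p))
    B = (B + B.T) / 2                         # symmetric perturbation direction

    e1 = np.eye(p)[:, 0]                      # true leading eigenvector of A
    for eps in [1e-1, 1e-2, 1e-3]:
        vals, vecs = np.linalg.eigh(A + eps * B)
        v = vecs[:, -1]                       # leading sample eigenvector
        sin_angle = np.sqrt(1 - min(1.0, abs(v @ e1)) ** 2)
        print(eps, sin_angle)                 # deviation scales roughly with eps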
Log-Transforming target var for training a Random Forest Regressor
I will be assuming that by "better performance" you mean better CV/validation performance, and not training performance. I want to invite you to think about the effect of log-transforming the target variable on single regression trees.

Regression trees make splits in a way that minimizes the MSE, which (considering that we predict the mean) means that they minimize the sum of the variances of the target in the child nodes. What happens if your target is skewed? If your variable is skewed, high values will affect the variances and push your split points towards higher values - forcing your decision tree to make less balanced splits and trying to "isolate" the tail from the rest of the points.

Example of a single split on non-transformed and transformed data: [figure omitted]

As a result, overall, your trees (and so your RF) will be more affected by your high-end values if your data is not transformed - which means that they should be more accurate in predicting high values and a bit less accurate on the lower ones. If you log-transform, you reduce the relative importance of these high values, and accept having more error on those while being more accurate on the bulk of your data. This might generalize better, and - in general - it also makes sense. Indeed, in the same regression, predicting $\hat{y}=105$ when $y=100$ is better than predicting $\hat{y}=15$ when $y=11$, because the error in relative terms often matters more than the absolute one.

Hope this was useful!
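To see the split-point effect concretely, here is a sketch with a single depth-1 tree on a hypothetical right-skewed target (scikit-learn assumed):

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 500).reshape(-1, 1)
    y = np.exp(3 * x.ravel()) * rng.lognormal(0, 0.1, 500)  # right-skewed target

    for target in (y, np.log(y)):
        stump = DecisionTreeRegressor(max_depth=1).fit(x, target)
        # raw target: split pushed toward high x; log target: split near the middle
        print(stump.tree_.threshold[0])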
Log-Transforming target var for training a Random Forest Regressor
Tangentially, the marginal distribution (that is, the distribution obtained when plotting a histogram) of the outcome is irrelevant in regression, since most regression methods make assumptions about the conditional distribution (that is, the distribution obtained when plotting the histogram of the outcome were I to only observe outcomes which have the same features).

Now, on to your question. If you are evaluating the performance on the transformed outcome, the results can be misleading. Because the log essentially squeezes the outcomes, the variance is also shrunk, meaning predictions will be closer to the observations. This shrinks the loss and appears to make your model better. Try doing this:

    from sklearn.dummy import DummyRegressor
    from sklearn.model_selection import cross_val_score

    cross_val_score(DummyRegressor(), X, y, scoring = 'neg_mean_squared_error')
    cross_val_score(DummyRegressor(), X, np.log(y), scoring = 'neg_mean_squared_error')

Same data, but the scores are immensely different. Why? Because the log shrinks the variance of the outcomes, making the model appear better even though it does nothing different.

If you want to transform your outcome, you can:

1. Train the model on the transformed outcomes
2. Predict on a held-out set
3. Re-transform the predictions to the original space
4. Evaluate the prediction quality in the original space

Sklearn makes this very easy with their TransformedTargetRegressor:

    from sklearn.ensemble import RandomForestRegressor
    from sklearn.compose import TransformedTargetRegressor
    from sklearn.model_selection import GridSearchCV, train_test_split
    from sklearn.datasets import make_regression
    import numpy as np

    rf = RandomForestRegressor()
    log_rf = TransformedTargetRegressor(rf, func = np.log, inverse_func = np.exp)
    params = {'regressor__n_estimators': [10, 100, 1000]}
    gscv = GridSearchCV(log_rf, param_grid = params, refit = True)

    X, y = make_regression(n_samples = 10_000, n_features = 50, n_informative = 5)
    y -= y.min() - 1  # Make the outcome positive.

    Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size = 0.25)
    gscv.fit(Xtrain, ytrain)

This will ensure that the model is trained on the log-transformed outcomes, back-transforms into the original space, and evaluates the loss in the original space.
Log-Transforming target var for training a Random Forest Regressor
Tangentially, the marginal distribution (that is, the distribution obtained when plotting a histogram) of the outcome is irrelevant in regression since most regression methods make assumptions about t
Log-Transforming target var for training a Random Forest Regressor Tangentially, the marginal distribution (that is, the distribution obtained when plotting a histogram) of the outcome is irrelevant in regression since most regression methods make assumptions about the conditional distribution (that is, the distribution obtained when plotting the histogram of the outcome were I to only observe outcomes which have the same features). Now, on to your question. If you are evaluating the performance of the model on the transformed outcome, the results can be misleading. Because the log essentially squeezes the outcomes, the variance is also shrunk, meaning predictions will be closer to the observations. This shrinks the loss and appears to make your model better. Try doing this:

import numpy as np
from sklearn.dummy import DummyRegressor
from sklearn.model_selection import cross_val_score

cross_val_score(DummyRegressor(), X, y, scoring='neg_mean_squared_error')
cross_val_score(DummyRegressor(), X, np.log(y), scoring='neg_mean_squared_error')

Same data, but the scores are immensely different. Why? Because the log shrinks the variance of the outcomes, making the model appear better even though it does nothing different. If you want to transform your outcome, you can: (1) train the model on the transformed outcomes, (2) predict on a held-out set, (3) re-transform the predictions to the original space, and (4) evaluate the prediction quality in the original space. Sklearn makes this very easy with their TransformedTargetRegressor.

from sklearn.ensemble import RandomForestRegressor
from sklearn.compose import TransformedTargetRegressor
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.datasets import make_regression
import numpy as np

rf = RandomForestRegressor()
log_rf = TransformedTargetRegressor(rf, func=np.log, inverse_func=np.exp)
params = {'regressor__n_estimators': [10, 100, 1000]}
gscv = GridSearchCV(log_rf, param_grid=params, refit=True)

X, y = make_regression(n_samples=10_000, n_features=50, n_informative=5)
y -= y.min() - 1  # Make the outcome positive so the log is defined.

Xtrain, Xtest, ytrain, ytest = train_test_split(X, y, test_size=0.25)
gscv.fit(Xtrain, ytrain)

This will ensure that the model is trained on the log-transformed outcomes, back-transforms into the original space, and evaluates the loss in the original space.
Log-Transforming target var for training a Random Forest Regressor Tangentially, the marginal distribution (that is, the distribution obtained when plotting a histogram) of the outcome is irrelevant in regression since most regression methods make assumptions about t
33,718
Bayesian inverse modeling with non-identifiable parameters?
Adding priors does not solve the identifiability problem This is a case where the parameters are non-identifiable in your model. As you point out, contributions from the individual non-identifiable parameters in these ratios cannot be distinguished using the data. A relevant paper on this matter is O'Neill (2005). When using Bayesian analysis with a non-identifiable model, specification of a prior for all the individual non-identifiable parameters will still lead you to a valid posterior, but this is strongly affected by the prior. The posterior for the non-identifiable parameters converges to a fixed asymptotic distribution that also depends heavily on the prior, so it lacks posterior consistency. The fact that you get a valid posterior, and this converges to a fixed asymptotic distribution, often gives the misleading impression that Bayesian analysis renders the identifiability problem benign. However, it is crucial to note that the posterior in these cases is strongly affected by the prior in ways that do not vanish as we get more and more data. The identifiability problem is not rendered benign merely by using Bayesian analysis with priors. Posterior depends heavily on prior: To see exactly what I mean, define the minimal sufficient parameters $\phi_1 \equiv \beta_1 / \beta_0$ and $\phi_2 \equiv \beta_2 / \beta_0$. These are the parameters that are identified in the present model. Using the rules for density transformation, the posterior distribution for the three non-identifiable parameters of interest can be written as: $$\begin{equation} \begin{aligned} \pi(\beta_0, \beta_1, \beta_2 | \mathbf{x}, \mathbf{y}) &= \frac{1}{\beta_0^2} \cdot \pi(\beta_0, \phi_1, \phi_2 | \mathbf{x}, \mathbf{y}) \\[6pt] &= \frac{1}{\beta_0^2} \cdot p(\beta_0 | \phi_1, \phi_2) \cdot \pi(\phi_1, \phi_2 | \mathbf{x}, \mathbf{y}). \\[6pt] \end{aligned} \end{equation}$$ Now, the posterior $\pi(\phi_1, \phi_2 | \mathbf{x}, \mathbf{y})$ for the minimal sufficient parameters (which are identifiable) is determined by the prior assumptions and the data as normal. However, the density $p(\beta_0 | \phi_1, \phi_2)$ is determined purely by the prior (i.e., it does not change as you get more data). This latter density is just an aspect of the assumed prior on the three non-identifiable parameters. Hence, the posterior of the non-identifiable parameters will be determined in large measure by a part that is purely a function of the prior. Posterior converges for identifiable parameters, not non-identifiable parameters: Bayesian asymptotic theory tells us that, under broad conditions, the posterior distribution of identifiable parameters converges towards a point-mass on the true values. (More specifically, there are a number of convergence results that show asymptotic convergence to a normal distribution with mean that approaches the true identifiable parameter values and variance that approaches zero.) In the context of regression there are some additional convergence conditions on the explanatory variables, but again, that convergence result holds broadly. Under appropriate conditions, as $n \rightarrow \infty$ the density $\pi(\phi_1, \phi_2 | \mathbf{x}, \mathbf{y})$ will converge closer and closer to a point-mass distribution on the true values $(\phi_1^*, \phi_2^*)$. In the limit the posterior distribution for the non-identifiable parameters converges to a limiting distribution determined by the prior (that is not a point-mass): $$\begin{equation} \begin{aligned} \pi(\beta_0, \beta_1, \beta_2 | \mathbf{x}, \mathbf{y}) &\rightarrow \pi(\beta_0, \beta_1, \beta_2 | \mathbf{x}_\infty, \mathbf{y}_\infty) \\[6pt] &\propto \frac{1}{\beta_0^2} \cdot p(\beta_0 | \phi_1^* = \beta_1 / \beta_0, \phi_2^* = \beta_2 / \beta_0). \\[6pt] \end{aligned} \end{equation}$$ We can see that this asymptotic density is affected by the data only through the true values of the minimal sufficient parameters. It is still heavily affected by the form of the density $p(\beta_0 | \phi_1, \phi_2)$, which is a function of the prior. Although the posterior for the identifiable parameters has converged to a point-mass on the true values, the posterior density for the non-identifiable parameters $\beta_0, \beta_1, \beta_2$ still retains uncertainty even in this limit. Its distribution is now entirely determined by the prior, conditional on holding the identifiable parameters fixed. In the other answer by Björn you can see that he gives an excellent example of this phenomenon in the simple case of IID data from a normal distribution with a mean that is a ratio of two non-identifiable parameters. As you can see from his example, with a large amount of data there is posterior convergence for the identifiable mean, but the corresponding posterior for the non-identifiable parameters is still highly variable (and almost entirely dependent on the prior). Conclusion: In Bayesian analysis you can assign a prior to a set of non-identifiable parameters and you get a valid posterior. However, despite the fact that we get a valid posterior, and asymptotic convergence of the posterior to a limiting distribution, all of those results are heavily affected by the prior, even with an infinite amount of data. In other words, don't let that fool you into thinking that you have "solved" the identifiability problem.
Bayesian inverse modeling with non-identifiable parameters?
Adding priors does not solve the identifiability problem This is a case where the parameters are non-identifiable in your model. As you point out, contributions from the individual non-identifiable
Bayesian inverse modeling with non-identifiable parameters? Adding priors does not solve the identifiability problem This is a case where the parameters are non-identifiable in your model. As you point out, contributions from the individual non-identifiable parameters in these ratios cannot be distinguished using the data. A relevant paper on this matter is O'Neill (2005). When using Bayesian analysis with a non-identifiable model, specification of a prior for all the individual non-identifiable parameters will still lead you to a valid posterior, but this is strongly affected by the prior. The posterior for the non-identifiable parameters converges to a fixed asymptotic distribution that also depends heavily on the prior, so it lacks posterior consistency. The fact that you get a valid posterior, and this converges to a fixed asymptotic distribution, often gives the misleading impression that Bayesian analysis renders the identifiability problem benign. However, it is crucial to note that the posterior in these cases is strongly affected by the prior in ways that do not vanish as we get more and more data. The identifiability problem is not rendered benign merely by using Bayesian analysis with priors. Posterior depends heavily on prior: To see exactly what I mean, define the minimal sufficient parameters $\phi_1 \equiv \beta_1 / \beta_0$ and $\phi_2 \equiv \beta_2 / \beta_0$. These are the parameters that are identified in the present model. Using the rules for density transformation, the posterior distribution for the three non-identifiable parameters of interest can be written as: $$\begin{equation} \begin{aligned} \pi(\beta_0, \beta_1, \beta_2 | \mathbf{x}, \mathbf{y}) &= \frac{1}{\beta_0^2} \cdot \pi(\beta_0, \phi_1, \phi_2 | \mathbf{x}, \mathbf{y}) \\[6pt] &= \frac{1}{\beta_0^2} \cdot p(\beta_0 | \phi_1, \phi_2) \cdot \pi(\phi_1, \phi_2 | \mathbf{x}, \mathbf{y}). \\[6pt] \end{aligned} \end{equation}$$ Now, the posterior $\pi(\phi_1, \phi_2 | \mathbf{x}, \mathbf{y})$ for the minimal sufficient parameters (which are identifiable) is determined by the prior assumptions and the data as normal. However, the density $p(\beta_0 | \phi_1, \phi_2)$ is determined purely by the prior (i.e., it does not change as you get more data). This latter density is just an aspect of the assumed prior on the three non-identifiable parameters. Hence, the posterior of the non-identifiable parameters will be determined in large measure by a part that is purely a function of the prior. Posterior converges for identifiable parameters, not non-identifiable parameters: Bayesian asymptotic theory tells us that, under broad conditions, the posterior distribution of identifiable parameters converges towards a point-mass on the true values. (More specifically, there are a number of convergence results that show asymptotic convergence to a normal distribution with mean that approaches the true identifiable parameter values and variance that approaches zero.) In the context of regression there are some additional convergence conditions on the explanatory variables, but again, that convergence result holds broadly. Under appropriate conditions, as $n \rightarrow \infty$ the density $\pi(\phi_1, \phi_2 | \mathbf{x}, \mathbf{y})$ will converge closer and closer to a point-mass distribution on the true values $(\phi_1^*, \phi_2^*)$. In the limit the posterior distribution for the non-identifiable parameters converges to a limiting distribution determined by the prior (that is not a point-mass): $$\begin{equation} \begin{aligned} \pi(\beta_0, \beta_1, \beta_2 | \mathbf{x}, \mathbf{y}) &\rightarrow \pi(\beta_0, \beta_1, \beta_2 | \mathbf{x}_\infty, \mathbf{y}_\infty) \\[6pt] &\propto \frac{1}{\beta_0^2} \cdot p(\beta_0 | \phi_1^* = \beta_1 / \beta_0, \phi_2^* = \beta_2 / \beta_0). \\[6pt] \end{aligned} \end{equation}$$ We can see that this asymptotic density is affected by the data only through the true values of the minimal sufficient parameters. It is still heavily affected by the form of the density $p(\beta_0 | \phi_1, \phi_2)$, which is a function of the prior. Although the posterior for the identifiable parameters has converged to a point-mass on the true values, the posterior density for the non-identifiable parameters $\beta_0, \beta_1, \beta_2$ still retains uncertainty even in this limit. Its distribution is now entirely determined by the prior, conditional on holding the identifiable parameters fixed. In the other answer by Björn you can see that he gives an excellent example of this phenomenon in the simple case of IID data from a normal distribution with a mean that is a ratio of two non-identifiable parameters. As you can see from his example, with a large amount of data there is posterior convergence for the identifiable mean, but the corresponding posterior for the non-identifiable parameters is still highly variable (and almost entirely dependent on the prior). Conclusion: In Bayesian analysis you can assign a prior to a set of non-identifiable parameters and you get a valid posterior. However, despite the fact that we get a valid posterior, and asymptotic convergence of the posterior to a limiting distribution, all of those results are heavily affected by the prior, even with an infinite amount of data. In other words, don't let that fool you into thinking that you have "solved" the identifiability problem.
Bayesian inverse modeling with non-identifiable parameters? Adding priors does not solve the identifiability problem This is a case where the parameters are non-identifiable in your model. As you point out, contributions from the individual non-identifiable
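A small R illustration (added for illustration, not from the original answer) of the limiting behaviour described above: in the infinite-data limit the ratio $\phi_1 = \beta_1/\beta_0$ is pinned to its true value, yet $(\beta_0, \beta_1)$ keep all the spread that comes from the prior. The half-normal prior on $\beta_0$ is an arbitrary assumption, and the sketch ignores the $1/\beta_0^2$ reweighting term, keeping only the ridge structure:

set.seed(1)
phi1_star <- 2                 # "true" identifiable ratio (assumed)
beta0 <- abs(rnorm(10000))     # draws from an assumed half-normal prior on beta0
beta1 <- phi1_star * beta0     # forced by the point-mass on phi1 = beta1 / beta0
c(sd_beta0 = sd(beta0), sd_beta1 = sd(beta1))  # spread that does not vanish
cor(beta0, beta1)              # exactly 1: all mass lies on the ridge beta1 = 2 * beta0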
33,719
Bayesian inverse modeling with non-identifiable parameters?
Yes, with a Bayesian analysis you can get sensible posteriors that concentrate around sensible values to the extent that the combination of information in the prior and the likelihood allows. In this sense the Bayesian analysis can deal with this kind of situation a lot better than a frequentist analysis. However, it cannot get around the fundamental non-identifiability of the parameters in a model, and the posteriors will still reflect the lack of identifiability of the parameters. To use an example, let's use a simpler model that is just $\log Z_i \sim N(\theta_1/\theta_2, \sigma^2)$ and let's assume that we have a huge amount of data. We'll have hardly any uncertainty around the ratio $\theta_1/\theta_2$, but lots of values for each parameter are still getting support in the likelihood. Thus, we still get a wide marginal posterior distribution for each parameter, and the joint distribution has a very strong correlation between the two parameters. This is illustrated with example code below (using the re-parameterization $Y_i := \log Z_i$ and $\beta_j := \log \theta_j$). Obviously, the marginal posterior would be wider if our prior for the parameters had been wider, and the posterior correlation would be less strong if we had less data (then the parameter identifiability problem would be less obvious). I would expect something similar to happen in your example - whether that's a problem or not is a different matter.

library(rstan)
library(bayesplot)

y <- exp(rnorm(10000, 0, 1))

stancode <- "
data {
  int n;
  real y[n];
}
parameters {
  real beta0;
  real beta1;
  real<lower=0> sigma;
}
model {
  beta0 ~ normal(0, 1);
  beta1 ~ normal(0, 1);
  sigma ~ normal(0, 1);
  y ~ normal(exp(beta1 - beta0), sigma);
}
"

stanfit <- stan(model_code = stancode, data = list(n = length(y), y = y))
posterior <- as.matrix(stanfit)
mcmc_pairs(posterior, pars = c("beta0", "beta1"))
Bayesian inverse modeling with non-identifiable parameters?
Yes, with a Bayesian analysis you can get sensible posteriors that concentrate around sensible values to the extent that the combination of information in the prior and the likelihood allows. In this sense th
Bayesian inverse modeling with non-identifiable parameters? Yes, with a Bayesian analysis you can get sensible posteriors that concentrate around sensible values to the extent that the combination of information in the prior and the likelihood allows. In this sense the Bayesian analysis can deal with this kind of situation a lot better than a frequentist analysis. However, it cannot get around the fundamental non-identifiability of the parameters in a model, and the posteriors will still reflect the lack of identifiability of the parameters. To use an example, let's use a simpler model that is just $\log Z_i \sim N(\theta_1/\theta_2, \sigma^2)$ and let's assume that we have a huge amount of data. We'll have hardly any uncertainty around the ratio $\theta_1/\theta_2$, but lots of values for each parameter are still getting support in the likelihood. Thus, we still get a wide marginal posterior distribution for each parameter, and the joint distribution has a very strong correlation between the two parameters. This is illustrated with example code below (using the re-parameterization $Y_i := \log Z_i$ and $\beta_j := \log \theta_j$). Obviously, the marginal posterior would be wider if our prior for the parameters had been wider, and the posterior correlation would be less strong if we had less data (then the parameter identifiability problem would be less obvious). I would expect something similar to happen in your example - whether that's a problem or not is a different matter.

library(rstan)
library(bayesplot)

y <- exp(rnorm(10000, 0, 1))

stancode <- "
data {
  int n;
  real y[n];
}
parameters {
  real beta0;
  real beta1;
  real<lower=0> sigma;
}
model {
  beta0 ~ normal(0, 1);
  beta1 ~ normal(0, 1);
  sigma ~ normal(0, 1);
  y ~ normal(exp(beta1 - beta0), sigma);
}
"

stanfit <- stan(model_code = stancode, data = list(n = length(y), y = y))
posterior <- as.matrix(stanfit)
mcmc_pairs(posterior, pars = c("beta0", "beta1"))
Bayesian inverse modeling with non-identifiable parameters? Yes, with a Bayesian analysis you can get sensible posteriors that concentrate around sensible values to the extent that the combination of information in the prior and the likelihood allows. In this sense th
33,720
How to show this matrix is positive semidefinite?
This is a nice opportunity to apply the definitions: no advanced theorems are needed. To simplify the notation, for any number $\rho$ let $$\mathbb{A}(\rho)=\pmatrix{A&\rho B\\\rho B^\prime&D}$$ be a symmetric block matrix. (If working with block matrices is unfamiliar to you, just assume at first that $A$, $B$, $D$, $x$, and $y$ are numbers. You will get the general idea from this case.) For $\mathbb{A}(\rho)$ to be positive semidefinite (PSD) merely means that for all vectors $x$ and $y$ of suitable dimensions $$\eqalign{ 0 &\le \pmatrix{x^\prime&y^\prime} \mathbb{A}(\rho) \pmatrix{x\\y} \\ &= \pmatrix{x^\prime&y^\prime} \pmatrix{A&\rho B\\\rho B^\prime&D}\pmatrix{x\\y} \\ &=x^\prime A x + 2\rho y^\prime B^\prime x + y^\prime D y.\tag{1} }$$ This is what we have to prove when $|\rho|\le 1$. We are told that $\mathbb{A}(1)$ is PSD. I claim that $\mathbb{A}(-1)$ also is PSD. This follows by negating $y$ in expression $(1)$: as $\pmatrix{x\\y}$ ranges through all possible vectors, $\pmatrix{x\\-y}$ also ranges through all possible vectors, producing $$\eqalign{ 0 &\le \pmatrix{x^\prime&-y^\prime}\mathbb{A}(1)\pmatrix{x\\-y} \\ &= x^\prime A x + 2(-y)^\prime B^\prime x + (-y)^\prime D (-y) \\ &= x^\prime A x + 2(-1)y^\prime B^\prime x + y^\prime D y \\ &= \pmatrix{x^\prime&y^\prime}\mathbb{A}(-1)\pmatrix{x\\y}, }$$ showing that $(1)$ holds with $\rho=-1.$ Notice that $\mathbb{A}(\rho)$ can be expressed as a linear interpolant of the extremes $\mathbb{A}(-1)$ and $\mathbb{A}(1)$: $$\mathbb{A}(\rho) = \frac{1-\rho}{2}\mathbb{A}(-1) + \frac{1+\rho}{2}\mathbb{A}(1).\tag{2}$$ When $|\rho|\le 1$, both coefficients $\color{blue}{(1-\rho)/2}$ and $\color{blue}{(1+\rho)/2}$ are non-negative. Therefore, since both ${\pmatrix{x^\prime&y^\prime}\mathbb{A}(1)\pmatrix{x\\y}}$ and $\pmatrix{x^\prime&y^\prime}\mathbb{A}(-1)\pmatrix{x\\y}$ are nonnegative, so is the right hand side of $$\eqalign{ &\pmatrix{x^\prime&y^\prime}\mathbb{A}(\rho)\pmatrix{x\\y} \\ &= \color{blue}{\left(\frac{1-\rho}{2}\right)}\pmatrix{x^\prime&y^\prime}\mathbb{A}(-1)\pmatrix{x\\y} + \color{blue}{\left(\frac{1+\rho}{2}\right)}\pmatrix{x^\prime&y^\prime}\mathbb{A}(1)\pmatrix{x\\y} \\ &\ge \color{blue}{0}(0) + \color{blue}{0}(0) = 0. }$$ (I use colors to help you see the four separate non-negative terms that are involved.) Because $x$ and $y$ are arbitrary, we have proven $(1)$ for all $\rho$ with $|\rho|\le 1$.
How to show this matrix is positive semidefinite?
This is a nice opportunity to apply the definitions: no advanced theorems are needed. To simplify the notation, for any number $\rho$ let $$\mathbb{A}(\rho)=\pmatrix{A&\rho B\\\rho B^\prime&D}$$ be a
How to show this matrix is positive semidefinite? This is a nice opportunity to apply the definitions: no advanced theorems are needed. To simplify the notation, for any number $\rho$ let $$\mathbb{A}(\rho)=\pmatrix{A&\rho B\\\rho B^\prime&D}$$ be a symmetric block matrix. (If working with block matrices is unfamiliar to you, just assume at first that $A$, $B$, $D$, $x$, and $y$ are numbers. You will get the general idea from this case.) For $\mathbb{A}(\rho)$ to be positive semidefinite (PSD) merely means that for all vectors $x$ and $y$ of suitable dimensions $$\eqalign{ 0 &\le \pmatrix{x^\prime&y^\prime} \mathbb{A}(\rho) \pmatrix{x\\y} \\ &= \pmatrix{x^\prime&y^\prime} \pmatrix{A&\rho B\\\rho B^\prime&D}\pmatrix{x\\y} \\ &=x^\prime A x + 2\rho y^\prime B^\prime x + y^\prime D y.\tag{1} }$$ This is what we have to prove when $|\rho|\le 1$. We are told that $\mathbb{A}(1)$ is PSD. I claim that $\mathbb{A}(-1)$ also is PSD. This follows by negating $y$ in expression $(1)$: as $\pmatrix{x\\y}$ ranges through all possible vectors, $\pmatrix{x\\-y}$ also ranges through all possible vectors, producing $$\eqalign{ 0 &\le \pmatrix{x^\prime&-y^\prime}\mathbb{A}(1)\pmatrix{x\\-y} \\ &= x^\prime A x + 2(-y)^\prime B^\prime x + (-y)^\prime D (-y) \\ &= x^\prime A x + 2(-1)y^\prime B^\prime x + y^\prime D y \\ &= \pmatrix{x^\prime&y^\prime}\mathbb{A}(-1)\pmatrix{x\\y}, }$$ showing that $(1)$ holds with $\rho=-1.$ Notice that $\mathbb{A}(\rho)$ can be expressed as a linear interpolant of the extremes $\mathbb{A}(-1)$ and $\mathbb{A}(1)$: $$\mathbb{A}(\rho) = \frac{1-\rho}{2}\mathbb{A}(-1) + \frac{1+\rho}{2}\mathbb{A}(1).\tag{2}$$ When $|\rho|\le 1$, both coefficients $\color{blue}{(1-\rho)/2}$ and $\color{blue}{(1+\rho)/2}$ are non-negative. Therefore, since both ${\pmatrix{x^\prime&y^\prime}\mathbb{A}(1)\pmatrix{x\\y}}$ and $\pmatrix{x^\prime&y^\prime}\mathbb{A}(-1)\pmatrix{x\\y}$ are nonnegative, so is the right hand side of $$\eqalign{ &\pmatrix{x^\prime&y^\prime}\mathbb{A}(\rho)\pmatrix{x\\y} \\ &= \color{blue}{\left(\frac{1-\rho}{2}\right)}\pmatrix{x^\prime&y^\prime}\mathbb{A}(-1)\pmatrix{x\\y} + \color{blue}{\left(\frac{1+\rho}{2}\right)}\pmatrix{x^\prime&y^\prime}\mathbb{A}(1)\pmatrix{x\\y} \\ &\ge \color{blue}{0}(0) + \color{blue}{0}(0) = 0. }$$ (I use colors to help you see the four separate non-negative terms that are involved.) Because $x$ and $y$ are arbitrary, we have proven $(1)$ for all $\rho$ with $|\rho|\le 1$.
How to show this matrix is positive semidefinite? This is a nice opportunity to apply the definitions: no advanced theorems are needed. To simplify the notation, for any number $\rho$ let $$\mathbb{A}(\rho)=\pmatrix{A&\rho B\\\rho B^\prime&D}$$ be a
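A quick numeric sanity check in R of the interpolation argument above (a sketch added for illustration, not part of the proof; the block sizes and the random PSD matrix are arbitrary choices):

set.seed(1)
p <- 3; q <- 2
M <- matrix(rnorm((p + q)^2), p + q)
K <- crossprod(M)  # a random PSD matrix supplying the blocks A, B, D
A <- K[1:p, 1:p]; B <- K[1:p, p + 1:q]; D <- K[p + 1:q, p + 1:q]
A_rho <- function(rho) rbind(cbind(A, rho * B), cbind(rho * t(B), D))
min(sapply(seq(-1, 1, by = 0.1),
           function(r) min(eigen(A_rho(r), symmetric = TRUE)$values)))
# smallest eigenvalue over the grid of |rho| <= 1: nonnegative up to floating-point error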
33,721
How to show this matrix is positive semidefinite?
There is already a great answer by @whuber, so I will try to give an alternative, shorter proof, using a couple theorems: (1) for any $A$ - PSD and any $Q$ we have $Q^TAQ$ - PSD; (2) for $A$ - PSD and $B$ - PSD also $A + B$ - PSD; (3) for $A$ - PSD and $q > 0$ also $qA$ - PSD. And now: \begin{align*} K^* &= \begin{pmatrix} K_{1,1} & rK_{1,2} \\ rK_{2,1} & K_{2,2} \\ \end{pmatrix} \\ &= \begin{pmatrix} K_{1,1} & rK_{1,2} \\ rK_{2,1} & r^2K_{2,2} \\ \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & qK_{2,2} \\ \end{pmatrix}, \text{ where $q = 1 - r^2 > 0$} \\ &= \begin{pmatrix} I & 0 \\ 0 & rI \\ \end{pmatrix}^T \begin{pmatrix} K_{1,1} & K_{1,2} \\ K_{2,1} & K_{2,2} \\ \end{pmatrix} \begin{pmatrix} I & 0 \\ 0 & rI \\ \end{pmatrix} + q\begin{pmatrix} 0 & 0 \\ 0 & K_{2,2} \\ \end{pmatrix} \end{align*} (Here $q = 1 - r^2 > 0$ whenever $|r| < 1$; for $|r| = 1$ we have $q = 0$, so the second term simply vanishes, which is also fine.) Matrix $K$ is PSD by definition, and so is its submatrix $K_{2, 2}$.
How to show this matrix is positive semidefinite?
There is already a great answer by @whuber, so I will try to give an alternative, shorter proof, using a couple theorems. For any $A$ - PSD and any $Q$ we have $Q^TAQ$ - PSD For $A$ - PSD and $B$ - P
How to show this matrix is positive semidefinite? There is already a great answer by @whuber, so I will try to give an alternative, shorter proof, using a couple theorems: (1) for any $A$ - PSD and any $Q$ we have $Q^TAQ$ - PSD; (2) for $A$ - PSD and $B$ - PSD also $A + B$ - PSD; (3) for $A$ - PSD and $q > 0$ also $qA$ - PSD. And now: \begin{align*} K^* &= \begin{pmatrix} K_{1,1} & rK_{1,2} \\ rK_{2,1} & K_{2,2} \\ \end{pmatrix} \\ &= \begin{pmatrix} K_{1,1} & rK_{1,2} \\ rK_{2,1} & r^2K_{2,2} \\ \end{pmatrix} + \begin{pmatrix} 0 & 0 \\ 0 & qK_{2,2} \\ \end{pmatrix}, \text{ where $q = 1 - r^2 > 0$} \\ &= \begin{pmatrix} I & 0 \\ 0 & rI \\ \end{pmatrix}^T \begin{pmatrix} K_{1,1} & K_{1,2} \\ K_{2,1} & K_{2,2} \\ \end{pmatrix} \begin{pmatrix} I & 0 \\ 0 & rI \\ \end{pmatrix} + q\begin{pmatrix} 0 & 0 \\ 0 & K_{2,2} \\ \end{pmatrix} \end{align*} (Here $q = 1 - r^2 > 0$ whenever $|r| < 1$; for $|r| = 1$ we have $q = 0$, so the second term simply vanishes, which is also fine.) Matrix $K$ is PSD by definition, and so is its submatrix $K_{2, 2}$.
How to show this matrix is positive semidefinite? There is already a great answer by @whuber, so I will try to give an alternative, shorter proof, using a couple theorems. For any $A$ - PSD and any $Q$ we have $Q^TAQ$ - PSD For $A$ - PSD and $B$ - P
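A numeric check in R of the decomposition identity above (added for illustration; the dimensions, the random K, and r = 0.4 are arbitrary choices):

set.seed(1)
p <- 2
K <- crossprod(matrix(rnorm((2 * p)^2), 2 * p))  # random PSD matrix
r <- 0.4; q <- 1 - r^2
Q <- rbind(cbind(diag(p), matrix(0, p, p)),
           cbind(matrix(0, p, p), r * diag(p)))
Zer <- matrix(0, p, p)
lhs <- rbind(cbind(K[1:p, 1:p], r * K[1:p, p + 1:p]),
             cbind(r * K[p + 1:p, 1:p], K[p + 1:p, p + 1:p]))  # K* as defined above
rhs <- t(Q) %*% K %*% Q + q * rbind(cbind(Zer, Zer),
                                    cbind(Zer, K[p + 1:p, p + 1:p]))
max(abs(lhs - rhs))                        # ~ 0: both sides agree up to floating point
min(eigen(lhs, symmetric = TRUE)$values)   # >= 0: K* is indeed PSD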
33,722
Introduction to frequentist statistics for Bayesians [closed]
Actually, many of the things you mention are already discussed in the major Bayesian handbooks. In many cases those handbooks are written for readers who are frequentists by training, so they discuss many similarities and try translating the frequentist methods into Bayesian ground. One example is the Doing Bayesian Data Analysis book by John K. Kruschke or his paper translating the $t$-test into Bayesian ground. There is also another psychologist, Eric-Jan Wagenmakers, who with his team has talked a lot about translating frequentist concepts into Bayesian ground. Decision-theoretic concepts like loss functions, unbiasedness, etc. are discussed in The Bayesian Choice book by Christian P. Robert. Moreover, some of the concepts mentioned by you are not really Bayesian. For example, a loss function is a general concept, and only if you combine it with a prior distribution do you get a Bayes risk. It is also worth mentioning that even if you are a self-declared Bayesian, you probably already use a lot of frequentist methods. For example, if you use MCMC for estimation and then calculate the mean of the MCMC chain as your point estimate, then you are using a frequentist estimator, since you are not using any Bayesian model and priors to get the estimate of the mean of the MCMC chain. Finally, some frequentist concepts and tools are not easily translatable to a Bayesian setting, or the proposed "equivalents" are rather proofs of concept than something that you'd use in real life. In many cases the approaches are simply different and looking for parallels is a waste of time.
Introduction to frequentist statistics for Bayesians [closed]
Actually, many of the things you mention are already discussed in the major Bayesian handbooks. In many cases those handbooks are written for readers who are frequentists by training, so they discuss many similar
Introduction to frequentist statistics for Bayesians [closed] Actually, many of the things you mention are already discussed in the major Bayesian handbooks. In many cases those handbooks are written for readers who are frequentists by training, so they discuss many similarities and try translating the frequentist methods into Bayesian ground. One example is the Doing Bayesian Data Analysis book by John K. Kruschke or his paper translating the $t$-test into Bayesian ground. There is also another psychologist, Eric-Jan Wagenmakers, who with his team has talked a lot about translating frequentist concepts into Bayesian ground. Decision-theoretic concepts like loss functions, unbiasedness, etc. are discussed in The Bayesian Choice book by Christian P. Robert. Moreover, some of the concepts mentioned by you are not really Bayesian. For example, a loss function is a general concept, and only if you combine it with a prior distribution do you get a Bayes risk. It is also worth mentioning that even if you are a self-declared Bayesian, you probably already use a lot of frequentist methods. For example, if you use MCMC for estimation and then calculate the mean of the MCMC chain as your point estimate, then you are using a frequentist estimator, since you are not using any Bayesian model and priors to get the estimate of the mean of the MCMC chain. Finally, some frequentist concepts and tools are not easily translatable to a Bayesian setting, or the proposed "equivalents" are rather proofs of concept than something that you'd use in real life. In many cases the approaches are simply different and looking for parallels is a waste of time.
Introduction to frequentist statistics for Bayesians [closed] Actually, many of the things you mention are already discussed in the major Bayesian handbooks. In many cases those handbooks are written for readers who are frequentists by training, so they discuss many similar
33,723
Introduction to frequentist statistics for Bayesians [closed]
(not entirely sure about this one). If a certain estimator $\hat θ$ is a sufficient statistic for a parameter $θ$, and $p(θ)$ is flat, then $p(\hat θ|θ)=p(D|θ)=c⋅p(θ|D)$, i.e. the sampling distribution is equal to the likelihood function, and therefore equal to the posterior of the parameter given a flat prior. This is incorrect: $p(D|θ)=p(\hat θ|θ)\times p(D|\hat θ)$ when $\hat θ$ is a sufficient statistic; $p(D|θ)=c⋅p(θ|D)$ is false when considered as a function of $D$, and also false when considered as a function of $θ$ unless one uses the flat prior; only in this context does the posterior based on $\hat θ$ equal the posterior based on $D$. Furthermore, sufficiency has nothing to do with frequentism versus Bayesianism, even though there exist specifically Bayesian notions of sufficiency, as for instance in model comparison. As to "a Bayesian would probably agree that an unbiased frequentist estimator is generally more desirable than a biased frequentist one": the trouble with this part of the question is that Bayesian estimators are frequentist estimators as well in that they satisfy frequentist properties like admissibility or sometimes minimaxity. As discussed in a recent CV entry, Bayes estimates under squared error loss cannot be unbiased. And there is no reason beyond using a special loss function to favour unbiasedness: minimising a posterior loss is all-inclusive, and if imposing unbiasedness results in a higher loss it should not be considered. (A last point is that there are very few functions of the parameter that allow for unbiased estimators.)
Introduction to frequentist statistics for Bayesians [closed]
(not entirely sure about this one). If a certain estimator $\hat θ$ is a sufficient statistic for a parameter $θ$, and $p(θ)$ is flat, then $p(\hat θ|θ)=p(D|θ)=c⋅p(θ|D)$, i.e. the sampling distri
Introduction to frequentist statistics for Bayesians [closed] (not entirely sure about this one). If a certain estimator $\hat θ$ is a sufficient statistic for a parameter $θ$, and $p(θ)$ is flat, then $p(\hat θ|θ)=p(D|θ)=c⋅p(θ|D)$, i.e. the sampling distribution is equal to the likelihood function, and therefore equal to the posterior of the parameter given a flat prior. This is incorrect: $p(D|θ)=p(\hat θ|θ)\times p(D|\hat θ)$ when $\hat θ$ is a sufficient statistic; $p(D|θ)=c⋅p(θ|D)$ is false when considered as a function of $D$, and also false when considered as a function of $θ$ unless one uses the flat prior; only in this context does the posterior based on $\hat θ$ equal the posterior based on $D$. Furthermore, sufficiency has nothing to do with frequentism versus Bayesianism, even though there exist specifically Bayesian notions of sufficiency, as for instance in model comparison. As to "a Bayesian would probably agree that an unbiased frequentist estimator is generally more desirable than a biased frequentist one": the trouble with this part of the question is that Bayesian estimators are frequentist estimators as well in that they satisfy frequentist properties like admissibility or sometimes minimaxity. As discussed in a recent CV entry, Bayes estimates under squared error loss cannot be unbiased. And there is no reason beyond using a special loss function to favour unbiasedness: minimising a posterior loss is all-inclusive, and if imposing unbiasedness results in a higher loss it should not be considered. (A last point is that there are very few functions of the parameter that allow for unbiased estimators.)
Introduction to frequentist statistics for Bayesians [closed] (not entirely sure about this one). If a certain estimator $\hat θ$ is a sufficient statistic for a parameter $θ$, and $p(θ)$ is flat, then $p(\hat θ|θ)=p(D|θ)=c⋅p(θ|D)$, i.e. the sampling distri
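A concrete instance of the factorisation above, added for illustration (the Bernoulli model is an assumed example, not from the original answer): for $D=(x_1,\dots,x_n)$ i.i.d. Bernoulli$(θ)$ with sufficient statistic $s=\sum_i x_i$, we have $p(D|θ)=θ^s(1-θ)^{n-s}$, $p(s|θ)=\binom{n}{s}θ^s(1-θ)^{n-s}$ and $p(D|s)=\binom{n}{s}^{-1}$, so indeed $p(D|θ)=p(s|θ)\times p(D|s)$, with the second factor free of $θ$.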
33,724
Introduction to frequentist statistics for Bayesians [closed]
It appears to me as if you are considering a world of frequentists and Bayesians. That is not very nuanced, as if you have to be one or the other, or as if the methods applied are determined by some personal beliefs (rather than by convenience and the specific problem and information at hand). I believe that this is a misconception based on current trends in calling oneself a frequentist or Bayesian, and also lots of statistical language may be confusing. Just try to have a group of statisticians explain p-value or confidence interval. Some classical works may help you to understand frequentist inference. The classical works contain fundamental principles, are close to the heat of the discussion between proponents, and provide a background of the (practical) motivation and relevance at that time. Also, these classical works on frequentist methods were written in a time when people mostly worked with Bayesian principles and mathematical calculation of probability (note that statistics is not always as if you are working on a typical mathematics problem with probabilities; the probabilities may be very ill-defined). Frequentist probability is not inverse probability 'Inverse Probability', Fisher 1930 You present the likelihood as being a Bayesian expression with a flat prior. However, while the mathematics coincide (when wrongly interpreted, since you may get P(x|a) = P(a|x), up to a constant, but they are not the same terms), the construction and meaning are different. Likelihood is not meant to be a 'Bayesian probability based on flat, or uninformed, priors'. Likelihood is not even a probability and does not follow the rules of probability distributions (for instance, you cannot add up likelihoods for different events, and the integral is not equal to one); it is only when you multiply it with a flat prior that it becomes a probability, but then the meaning has changed as well. Some interesting quotes from 'Inverse Probability', Fisher 1930. Bayesian and frequentist methods are different tools: ...there are two different measures of rational belief appropriate to different cases. Knowing the population we can express our incomplete knowledge of, or expectation of, the sample in terms of probability; knowing the sample we can express our incomplete knowledge of the population in terms of likelihood. We can state the relative likelihood that an unknown correlation is + 0.6, but not the probability that it lies in the range .595-.605. Note that there is a certain probability statement, which a frequentist method provides. By constructing a table of corresponding values, we may know as soon as T is calculated what is the fiducial 5 per cent. value of $\theta$, and that the true value of $\theta$ will be less than this value in just 5 per cent. of trials. This then is a definite probability statement about the unknown parameter $\theta$, which is true irrespective of any assumption as to its a priori distribution. A frequentist method makes a statement about the probability that an experiment (with random interval) will have the true value of a (possibly random) parameter inside the interval given by a statistic. This is not to be confused with the probability that a specific experiment (with fixed interval) will have the true value of the (fixed) parameter inside the interval given by the statistic. See also 'On the "Probable Error" of a Coefficient of Correlation Deduced from a Small Sample.' Fisher 1921, in which Fisher demonstrated that his method is not a Bayesian inverse probability. In the former paper it was found, by applying a method previously developed, that the « most likely » value of the correlation of the population was, numerically, slightly smaller than that of the sample. This conclusion was adversely criticized in Biometrika, apparently on the incorrect assumption that I had deduced it from Bayes theorem. It will be shown in this paper that when the sampling curves are rendered approximately normal, the correction I had proposed is equal to the distance between the population value and the mid-point of the sampling curve and is accordingly no more than the correction of a constant bias introduced by the method of calculation. No assumption as to a priori probability is involved. and ...two radically distinct concepts have been confused under the name of « probability » ... that is probability and likelihood. See also the note at the end of Fisher's article from 1921 in which he speaks more on the confusion. Note again that likelihood is a function of a set of parameters, but not a probability density function of that set of parameters. Probability is used for something you can observe, e.g. the probability that a die rolls six. Likelihood is used for something that you cannot observe, e.g. the hypothesis that a die rolls six 1/6 of the time. Also, you might like Fisher's work in which he is much lighter in his opinion on Bayes theorem (still describing the differences): 'On the mathematical foundations of theoretical statistics' Fisher 1922 (especially section 6, 'formal solution of problem of estimation'). More: If you can understand and appreciate the comments from Fisher on the difference between inverse probability and the principle of likelihood, you may wish to read further on differences within frequentist methods. 'Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability' Neyman 1937 This is a work of 50 pages and difficult to summarize. But it deals with your questions on unbiasedness, explains the method of least squares (and its difference from the method of maximum likelihood), and specifically provides a treatment of confidence intervals (frequentist intervals are already not similar or unique among themselves, let alone the same as Bayesian intervals for flat priors). Regarding the F-test, it is not clear what, in the name of Laplace, you think is wrong. If you like an early use you can look in 'Studies in crop variation. II. The manurial response of different potato varieties' 1923 Fisher and Mackenzie. This paper has the expression of ANOVA in a recognizable linear model, subdividing the sums of squares into between and within groups. (In the text of the 1923 article the test consists of a comparison of differences between the logs of sample standard deviations with a calculated standard error for this difference that is determined by the sum $\frac{1}{2d_1} + \frac{1}{2d_2}$ of reciprocal degrees of freedom. Later works turn this into more sophisticated expressions leading to the F-distribution, which may obscure the ideas that one may have about it. But in essence, without the technical juggling due to more exact distributions for small numbers, its origin is much like a z-test.)
Introduction to frequentist statistics for Bayesians [closed]
It appears to me as if you are considering a world of frequentists and Bayesians. That is not very nuanced, as if you have to be one or the other, or as if the methods applied are determined by
Introduction to frequentist statistics for Bayesians [closed] It appears to me as if you are considering a world of frequentists and Bayesians. That is not very nuanced, as if you have to be one or the other, or as if the methods applied are determined by some personal beliefs (rather than by convenience and the specific problem and information at hand). I believe that this is a misconception based on current trends in calling oneself a frequentist or Bayesian, and also lots of statistical language may be confusing. Just try to have a group of statisticians explain p-value or confidence interval. Some classical works may help you to understand frequentist inference. The classical works contain fundamental principles, are close to the heat of the discussion between proponents, and provide a background of the (practical) motivation and relevance at that time. Also, these classical works on frequentist methods were written in a time when people mostly worked with Bayesian principles and mathematical calculation of probability (note that statistics is not always as if you are working on a typical mathematics problem with probabilities; the probabilities may be very ill-defined). Frequentist probability is not inverse probability 'Inverse Probability', Fisher 1930 You present the likelihood as being a Bayesian expression with a flat prior. However, while the mathematics coincide (when wrongly interpreted, since you may get P(x|a) = P(a|x), up to a constant, but they are not the same terms), the construction and meaning are different. Likelihood is not meant to be a 'Bayesian probability based on flat, or uninformed, priors'. Likelihood is not even a probability and does not follow the rules of probability distributions (for instance, you cannot add up likelihoods for different events, and the integral is not equal to one); it is only when you multiply it with a flat prior that it becomes a probability, but then the meaning has changed as well. Some interesting quotes from 'Inverse Probability', Fisher 1930. Bayesian and frequentist methods are different tools: ...there are two different measures of rational belief appropriate to different cases. Knowing the population we can express our incomplete knowledge of, or expectation of, the sample in terms of probability; knowing the sample we can express our incomplete knowledge of the population in terms of likelihood. We can state the relative likelihood that an unknown correlation is + 0.6, but not the probability that it lies in the range .595-.605. Note that there is a certain probability statement, which a frequentist method provides. By constructing a table of corresponding values, we may know as soon as T is calculated what is the fiducial 5 per cent. value of $\theta$, and that the true value of $\theta$ will be less than this value in just 5 per cent. of trials. This then is a definite probability statement about the unknown parameter $\theta$, which is true irrespective of any assumption as to its a priori distribution. A frequentist method makes a statement about the probability that an experiment (with random interval) will have the true value of a (possibly random) parameter inside the interval given by a statistic. This is not to be confused with the probability that a specific experiment (with fixed interval) will have the true value of the (fixed) parameter inside the interval given by the statistic. See also 'On the "Probable Error" of a Coefficient of Correlation Deduced from a Small Sample.' Fisher 1921, in which Fisher demonstrated that his method is not a Bayesian inverse probability. In the former paper it was found, by applying a method previously developed, that the « most likely » value of the correlation of the population was, numerically, slightly smaller than that of the sample. This conclusion was adversely criticized in Biometrika, apparently on the incorrect assumption that I had deduced it from Bayes theorem. It will be shown in this paper that when the sampling curves are rendered approximately normal, the correction I had proposed is equal to the distance between the population value and the mid-point of the sampling curve and is accordingly no more than the correction of a constant bias introduced by the method of calculation. No assumption as to a priori probability is involved. and ...two radically distinct concepts have been confused under the name of « probability » ... that is probability and likelihood. See also the note at the end of Fisher's article from 1921 in which he speaks more on the confusion. Note again that likelihood is a function of a set of parameters, but not a probability density function of that set of parameters. Probability is used for something you can observe, e.g. the probability that a die rolls six. Likelihood is used for something that you cannot observe, e.g. the hypothesis that a die rolls six 1/6 of the time. Also, you might like Fisher's work in which he is much lighter in his opinion on Bayes theorem (still describing the differences): 'On the mathematical foundations of theoretical statistics' Fisher 1922 (especially section 6, 'formal solution of problem of estimation'). More: If you can understand and appreciate the comments from Fisher on the difference between inverse probability and the principle of likelihood, you may wish to read further on differences within frequentist methods. 'Outline of a Theory of Statistical Estimation Based on the Classical Theory of Probability' Neyman 1937 This is a work of 50 pages and difficult to summarize. But it deals with your questions on unbiasedness, explains the method of least squares (and its difference from the method of maximum likelihood), and specifically provides a treatment of confidence intervals (frequentist intervals are already not similar or unique among themselves, let alone the same as Bayesian intervals for flat priors). Regarding the F-test, it is not clear what, in the name of Laplace, you think is wrong. If you like an early use you can look in 'Studies in crop variation. II. The manurial response of different potato varieties' 1923 Fisher and Mackenzie. This paper has the expression of ANOVA in a recognizable linear model, subdividing the sums of squares into between and within groups. (In the text of the 1923 article the test consists of a comparison of differences between the logs of sample standard deviations with a calculated standard error for this difference that is determined by the sum $\frac{1}{2d_1} + \frac{1}{2d_2}$ of reciprocal degrees of freedom. Later works turn this into more sophisticated expressions leading to the F-distribution, which may obscure the ideas that one may have about it. But in essence, without the technical juggling due to more exact distributions for small numbers, its origin is much like a z-test.)
Introduction to frequentist statistics for Bayesians [closed] It appears to me as if you are considering a world of frequentists and Bayesians. That is not very nuanced, as if you have to be one or the other, or as if the methods applied are determined by
33,725
Generating Multivariate Uniform Distribution in R
It depends a little bit on the terminology, but usually multivariate uniform refers to a distribution where every point in $[a,b]^d$ is equally likely. Hence, the dimensions are independent, and you can draw uniformly on $[a,b]$ $d$ times independently to get a sample from the multivariate uniform. If you don't want the dimensions to be independent, it might be worth looking into Copulas.
Generating Multivariate Uniform Distribution in R
It depends a little bit on the terminology, but usually multivariate uniform refers to a distribution where every point in $[a,b]^d$ is equally likely. Hence, the dimensions are independent, and you c
Generating Multivariate Uniform Distribution in R It depends a little bit on the terminology, but usually multivariate uniform refers to a distribution where every point in $[a,b]^d$ is equally likely. Hence, the dimensions are independent, and you can draw uniformly on $[a,b]$ $d$ times independently to get a sample from the multivariate uniform. If you don't want the dimensions to be independent, it might be worth looking into Copulas.
Generating Multivariate Uniform Distribution in R It depends a little bit on the terminology, but usually multivariate uniform refers to a distribution where every point in $[a,b]^d$ is equally likely. Hence, the dimensions are independent, and you c
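To illustrate the closing pointer above, a sketch of dependent uniform margins via a Gaussian copula (the copula package and the value rho = 0.7 are assumptions for the example):

library(copula)
set.seed(2019)
norm.cop <- normalCopula(0.7, dim = 2)  # Gaussian copula with correlation parameter 0.7
u <- rCopula(1000, norm.cop)            # each margin is Uniform(0, 1)
cor(u)                                  # but the two dimensions are clearly dependent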
33,726
Generating Multivariate Uniform Distribution in R
Following up on Sam's answer (with example values assumed for the sample size, dimension, and bounds):

n <- 1000; d <- 2; a <- 0; b <- 1
samps <- replicate(n, runif(d, a, b))  # draw samples; gives a d x n matrix
cov(t(samps))                          # get the d x d sample covariance matrix
Generating Multivariate Uniform Distribution in R
Following up on Sam's answer (with example values assumed for the sample size, dimension, and bounds): n <- 1000; d <- 2; a <- 0; b <- 1 samps <- replicate(n, runif(d, a, b)) # draw samples; gives a
Generating Multivariate Uniform Distribution in R Following up on Sam's answer (with example values assumed for the sample size, dimension, and bounds):

n <- 1000; d <- 2; a <- 0; b <- 1
samps <- replicate(n, runif(d, a, b))  # draw samples; gives a d x n matrix
cov(t(samps))                          # get the d x d sample covariance matrix
Generating Multivariate Uniform Distribution in R Following up on Sam's answer (with example values assumed for the sample size, dimension, and bounds): n <- 1000; d <- 2; a <- 0; b <- 1 samps <- replicate(n, runif(d, a, b)) # draw samples; gives a
33,727
Generating Multivariate Uniform Distribution in R
Following up on the second part of Sam's answer:

library(copula)
set.seed(2019)
d <- 2
n <- 1000
indep.cop <- indepCopula(d)
sample <- rCopula(n, indep.cop)
chisq.test(sample)
# Pearson's Chi-squared test
# data: sample
# X-squared = 362.15, df = 1998, p-value = 1
# Warning message:
# In chisq.test(sample) : Chi-squared approximation may be incorrect

Because the p-value is 1, it’s not unreasonable to assume that the dimensions are independent.
Generating Multivariate Uniform Distribution in R
Following up on the second part of Sam's answer: library(copula) set.seed(2019) d <- 2 n <- 1000 indep.cop <- indepCopula(d) sample <- rCopula(n, indep.cop) chisq.test(sample) # Pearson's Chi
Generating Multivariate Uniform Distribution in R Following up on the second part of Sam's answer:

library(copula)
set.seed(2019)
d <- 2
n <- 1000
indep.cop <- indepCopula(d)
sample <- rCopula(n, indep.cop)
chisq.test(sample)
# Pearson's Chi-squared test
# data: sample
# X-squared = 362.15, df = 1998, p-value = 1
# Warning message:
# In chisq.test(sample) : Chi-squared approximation may be incorrect

Because the p-value is 1, it’s not unreasonable to assume that the dimensions are independent.
Generating Multivariate Uniform Distribution in R Following up on the second part of Sam's answer: library(copula) set.seed(2019) d <- 2 n <- 1000 indep.cop <- indepCopula(d) sample <- rCopula(n, indep.cop) chisq.test(sample) # Pearson's Chi
33,728
Median > Mode > Mean > Range
The question has already been answered in the affirmative, but let's approach this from the point of view of construction -- how do we make a set of data that does this? First, note that we can always make all three location-measures greater than the range. Simply construct a preliminary data set that has median > mode > mean and compute the range. Now add (range - mean) + $\epsilon$ (for some small positive $\epsilon$) to all of the data values to get the final data set, whereupon the three location-measures will all exceed the range. So we have now reduced the problem to one of finding a data set where median > mode > mean. Imagine we already had some data with a suitable median and mode. To make the mean smaller than the median and mode, you simply place a single value far enough below the bulk of the data that the mean is pulled down; we can place a second value just above the bulk of the data to keep the median where it was, without changing the mode. So now we can modify an existing data set that simply has median > mode and obtain one which has the mean where we want. So let us create one with median > mode. We can do this by having one value repeated (if it's the only value that occurs twice, it's the sample mode) and then adding enough other values to make the median larger. This is an example: 21, 21, 22, 23, 24 The median is 22 but the mode is 21. Now let's add the two points as previously described, in such a way as to make the mean 20 without changing the median or mode. The present points sum to 111, so we need two points that add to 140 - 111 = 29, and one of them should be just larger than 24. Let's make it 25. Then the smaller point is 29 - 25 = 4. So now our data set is: 4, 21, 21, 22, 23, 24, 25 It has mean 20, mode 21 and median 22. Now let's fix the relationship of those with the range. What's the range? It's 25 - 4 = 21, which is presently larger than the mean. We simply need to add something to every data value to make the mean larger than 21, which leaves the range unaltered. Adding 2 will suffice. (Note that range - mean + 1 = 2, so we can see that we took $\epsilon=1$.) So our final data set is 6, 23, 23, 24, 25, 26, 27 The range is still 21, the mean is now 22, the mode is 23, the median is 24. So this step by step approach is quite easy to use. In summary: Make a small data set with median > mode by repeating the smallest value and having all the larger values distinct (it's easiest to use sorted values). Having 5 points is convenient (since it lets you specify the median by moving the middle value) but 4 is feasible if needed. Obtain a mean below the median by adding two points that don't alter the median or mode (i.e. two distinct/singleton values will not disturb the mode, and placing them one on either side of the previous data will preserve the median); place the larger value just above all the present data and then compute the smaller one so that the overall mean comes out just below the mode. This takes us to 7 data points. Compute the range. Add a constant (range - mean + $\epsilon$) to all the data values, which guarantees that the mean exceeds the range. This is the final data set. Checking those calculations in R:

x <- c(6, 23, 23, 24, 25, 26, 27)
data.frame(
  range = diff(range(x)),
  mean = mean(x),
  mode = max(as.numeric(names(table(x))[table(x) == max(table(x))])),
  median = median(x)
)
#   range mean mode median
# 1    21   22   23     24

(note that if we somehow happened to generate more than one mode, this calculation tries to find the largest of them)
Median > Mode > Mean > Range
The question has already been answered in the affirmative, but let's approach this from the point of view of construction -- how do we make a set of data that does this? First, note that we can always
Median > Mode > Mean > Range The question has already been answered in the affirmative, but let's approach this from the point of view of construction -- how do we make a set of data that does this? First, note that we can always make all three location-measures greater than the range. Simply construct a preliminary data set that has median > mode > mean and compute the range. Now add (range - mean) + $\epsilon$ (for some small positive $\epsilon$) to all of the data values to get the final data set, whereupon the three location-measures will all exceed the range. So we have now reduced the problem to one of finding a data set where median > mode > mean. Imagine we already had some data with a suitable median and mode. To make the mean smaller than the median and mode, you simply place a single value far enough below the bulk of the data that the mean is pulled down; we can place a second value just above the bulk of the data to keep the median where it was, without changing the mode. So now we can modify an existing data set that simply has median > mode and obtain one which has the mean where we want. So let us create one with median > mode. We can do this by having one value repeated (if it's the only value that occurs twice, it's the sample mode) and then adding enough other values to make the median larger. This is an example: 21, 21, 22, 23, 24 The median is 22 but the mode is 21. Now let's add the two points as previously described, in such a way as to make the mean 20 without changing the median or mode. The present points sum to 111, so we need two points that add to 140 - 111 = 29, and one of them should be just larger than 24. Let's make it 25. Then the smaller point is 29 - 25 = 4. So now our data set is: 4, 21, 21, 22, 23, 24, 25 It has mean 20, mode 21 and median 22. Now let's fix the relationship of those with the range. What's the range? It's 25 - 4 = 21, which is presently larger than the mean. We simply need to add something to every data value to make the mean larger than 21, which leaves the range unaltered. Adding 2 will suffice. (Note that range - mean + 1 = 2, so we can see that we took $\epsilon=1$.) So our final data set is 6, 23, 23, 24, 25, 26, 27 The range is still 21, the mean is now 22, the mode is 23, the median is 24. So this step by step approach is quite easy to use. In summary: Make a small data set with median > mode by repeating the smallest value and having all the larger values distinct (it's easiest to use sorted values). Having 5 points is convenient (since it lets you specify the median by moving the middle value) but 4 is feasible if needed. Obtain a mean below the median by adding two points that don't alter the median or mode (i.e. two distinct/singleton values will not disturb the mode, and placing them one on either side of the previous data will preserve the median); place the larger value just above all the present data and then compute the smaller one so that the overall mean comes out just below the mode. This takes us to 7 data points. Compute the range. Add a constant (range - mean + $\epsilon$) to all the data values, which guarantees that the mean exceeds the range. This is the final data set. Checking those calculations in R:

x <- c(6, 23, 23, 24, 25, 26, 27)
data.frame(
  range = diff(range(x)),
  mean = mean(x),
  mode = max(as.numeric(names(table(x))[table(x) == max(table(x))])),
  median = median(x)
)
#   range mean mode median
# 1    21   22   23     24

(note that if we somehow happened to generate more than one mode, this calculation tries to find the largest of them)
Median > Mode > Mean > Range The question has already been answered in the affirmative, but let's approach this from the point of view of construction -- how do we make a set of data that does this? First, note that we can always
33,729
Median > Mode > Mean > Range
Yes, it's not hard to come up with such a set. S = {0, 1, 2, 3, 4, 4, 1000}. Median = 3, Mode = 4, Mean = 1014/7 ≈ 144.86, Range = 1000. Data of this kind will skew to the right, since the mean is higher than the median, implying that on average, values above the median are further away from it than values below.
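For anyone who wants to verify these numbers, a quick check in R (using a frequency table to find the sample mode):

    x <- c(0, 1, 2, 3, 4, 4, 1000)
    tab <- table(x)
    c(median = median(x),
      mode   = as.numeric(names(tab)[which.max(tab)]),
      mean   = mean(x),
      range  = diff(range(x)))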
33,730
Median > Mode > Mean > Range
Irrespective of what the order is, the answer is yes. Data sets that are subsets of distributions whose left tails are heavier than their right tails will frequently have the mean smaller than the median, the median smaller than the mode, and all three smaller than the range. A beta distribution with mode greater than 1/2 has that property. If one wants to have the mode in any particular position, one can make a mixture distribution by adding in a small percentage of a narrow (small standard deviation) but tall distribution, e.g. a Dirac $\delta$, wherever one wants to put that mode.
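As a quick illustration of the beta example, a sketch in R; the Beta(8, 2) parameters are one arbitrary choice with mode above 1/2:

    set.seed(5)
    x <- rbeta(1e5, 8, 2)   # left-skewed; theoretical mode = (8-1)/(8+2-2) = 7/8
    c(mean = mean(x), median = median(x), mode = 7/8)
    # mean < median < mode for this left-skewed distribution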
33,731
How should I interpret this residual plot?
The plot is very dense, so it is not easy to see all the trends there may be. You could run formal tests for heteroscedasticity and autocorrelation to get additional diagnostics. What is visible is that over the first 100 values or so the variance of the residuals increases, which may hint at heteroscedasticity. Afterwards the variance seems to decrease again. This somewhat non-linear behavior of the variance may also point to the need for a different functional form (so maybe polynomial instead of linear). Another indication of this is the trend in the residuals you observe at the high end of the fitted values (there aren't any positive residuals anymore).
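If you want to run such formal tests, a minimal sketch in R using the lmtest package, assuming the fitted model object is called fit:

    library(lmtest)
    bptest(fit)   # Breusch-Pagan test for heteroscedasticity
    dwtest(fit)   # Durbin-Watson test for first-order autocorrelation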
33,732
How should I interpret this residual plot?
Your residual plot has a definite pattern, with several lines trending downward as fitted values increase. This pattern can occur if you fail to account for fixed/random effects in your model and the fixed effects are correlated with explanatory variables. Consider the following example:

    set.seed(999)
    N = 1000
    num.groups = 10
    alpha = runif(num.groups, -10, 10) # Fixed effects
    beta = 10 # Slope parameter
    group = sample(num.groups, N, replace = TRUE)
    X = rnorm(N, mean = alpha[group], sd = 5) # Mean of X correlated with fixed effect
    e = rnorm(N, sd = 1)
    y = alpha[group] + X * beta + e
    df = data.frame(group = as.factor(group), X, y)
    m.no.fe = lm(y ~ X, data = df) # Not including group fixed effects
    plot(m.no.fe, which = 1)

This results in the following residual/fitted plot: [figure omitted]. You might see something similar if, for example, you regressed entry earnings on SAT scores for several high schools but failed to include high school fixed effects; each school will have different baseline earnings (i.e., fixed effects) and mean SAT scores, which are likely correlated. Including group fixed effects, we get

    m.fe = lm(y ~ group + X, data = df) # Now including fixed effects
    plot(m.fe, which = 1)

which gives a much better residual/fitted plot: [figure omitted].
33,733
How should I interpret this residual plot?
The residual plot does look unusual from the point of view of standard OLS (linear) regression. There is, for example, an indication of heteroscedasticity, specifically that the spread of the residuals is larger in the middle than at the two ends. This is not the real problem, however. The real issue here is that you have fit the wrong model. OLS regression is based on the assumption that the response is normally distributed (conditional on the regressors, i.e., your $X$ variables). Your response is not normal, and cannot be. Your response is a number of seats sold out of a total number of seats in the theater. Your response is binomial. A binomial cannot be modeled correctly with OLS. You need to fit a logistic regression model. There will be some additional issues you will need to address. A couple that are apparent from your description: you have clustered observations, in the sense that you have multiple observations for the same show (i.e., over the 90 days). You need to address this non-independence, perhaps by fitting a GLMM. Another issue is that there will be a dependence between successive days within the same show. After all, if you have sold $y_d$ tickets by day $d$, you will have sold at least that many by day $d+1$. One way to try to address this is to fit only 89 days of data and to include the previous day's number as a covariate. (Sorry, on re-reading the question, I see you already have included a tickets-sold-to-date variable.) There may well be more issues to be addressed in modeling your data. These are fairly advanced topics; if you aren't familiar with them, you may need to work with a statistical consultant.
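A minimal sketch of the kind of model this points to, in R with lme4; the column names (sold, capacity, sold_to_date, show) and the data frame name are assumptions for illustration, not the definitive specification:

    library(lme4)
    # Binomial response: sold successes out of capacity trials per show-day,
    # with a random intercept per show to handle the clustering.
    m <- glmer(cbind(sold, capacity - sold) ~ sold_to_date + (1 | show),
               family = binomial, data = tickets)
    summary(m)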
33,734
Is it valid to derive a mean from categorical data?
It's somewhat misleading to just lump this in with ordinal data; I'd call it "binned data", though formally it's interval-censored data (and there are a variety of other terms that might be used). You can certainly talk about the population mean (since the underlying scale really does have a mean) and how to estimate it, bringing in what is understood about the underlying variable to help figure out ways to estimate it well from the bin-counts and bin-boundaries. While it's common to use the mid-point in such cases, it's not always the best possible option. However, one can get some idea of how biased that might be under some set of assumptions, so it's possible to get a sense of whether it really matters all that much. Where the underlying density is decreasing, the correct "midpoint" to use would be left of half way, and if the underlying density is increasing, the correct "midpoint" to use would be right of half way. If you can come up with a plausible distributional model for the underlying variable, the mean can be estimated from the binned data via maximum likelihood (for example). Even in the absence of any model at all, one can place limits on the mean, since the lowest the mean can be is when all the values are at the low end of each interval and highest when they're all up at the high end of each interval. [Even if the upper category is seemingly open-ended, there's still likely an effective upper bound on hours worked. e.g. it's simply impossible to work 25 hours in a day or 169 hours in a week, even if you never need to eat or sleep. Likely there's some other, substantially lower, bound beyond which nobody can go for one reason or another.]
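As a sketch of the maximum-likelihood route, here is one way it could look in R, assuming (purely for illustration) a log-normal model for hours worked, with hypothetical bin boundaries and counts:

    lo <- c(0, 10, 20); hi <- c(10, 20, Inf)   # hypothetical bins, open-topped
    counts <- c(22, 43, 35)                    # hypothetical bin counts
    negloglik <- function(par) {
      mu <- par[1]; sigma <- exp(par[2])       # log-parametrization keeps sigma > 0
      p <- plnorm(hi, mu, sigma) - plnorm(lo, mu, sigma)
      -sum(counts * log(p))
    }
    fit <- optim(c(log(15), log(0.5)), negloglik)
    exp(fit$par[1] + exp(fit$par[2])^2 / 2)    # implied estimate of the mean hours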
33,735
Is it valid to derive a mean from categorical data?
No, I would not consider that to be valid. The problem is that the mean of the true values in each category is not likely to be the midpoint. For example, there are probably many more people who would answer 10 hours than one hour, so the average hours worked within the 1-10 category will be more than 5.5, but you are assuming it is 5.5. Hence your estimate will be biased. What you could do is consider it to be a scale with a weird non-linear transformation, saying something like "On a scale where 1 = 1 to 10, 2 = 11-20, ... the mean score was 1.8." But if you only have three categories, you can just say "22% of people worked 1-10 hours, 43% worked 11-20 hours, ...". Unless there's a very good reason that you need a mean, I would do that.
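A quick simulation of that bias (the gamma "truth" and the bin boundaries here are arbitrary assumptions, chosen only to make the point):

    set.seed(4)
    hours <- pmax(1, round(rgamma(1e5, shape = 2, scale = 8)))  # skewed "true" hours
    bins  <- cut(hours, breaks = c(0, 10, 20, 30, Inf))
    mids  <- c(5.5, 15.5, 25.5, 45.5)  # the midpoint for the open top bin is itself a guess
    c(true_mean = mean(hours),
      midpoint_estimate = sum(table(bins) * mids) / length(hours))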
33,736
Is it valid to derive a mean from categorical data?
Possible? Yes, as you've shown. Valid? Depends on what you mean. It's an estimate, and estimates can be biased. Consider the case where half the respondents give you exact measurements (e.g. 22 hours) and half give you a binned estimate (e.g. 21-30 hours). If you calculate the average binned estimate as you showed above (n1 x midpoint of category 1 + n2 x midpoint of category 2 + ..., divided by total n), then you could average that number with the mean exact measurement to get an estimate of the average working hours. Or maybe you want to give more weight to the mean exact measurement, and so you could do a weighted average of the two means to estimate the average working hours. A third estimator could look like this: bin the exact measurements into the three categories, and then find the deviation of the empirical average within a bin from the midpoint of that bin (e.g. with exact hours observed as 22, 24, and 23, the average within the bin is 23, which deviates from 25.5 by 2.5). Then, you may choose to use the empirical average within each bin (instead of the midpoint of the bin) in order to calculate the average work hours from the observations that had binned measurements: n1 x empirical average (from observations with exact measurements) within bin 1 + n2 x empirical average within bin 2 + ..., divided by total n. Another estimator could take a parametric assumption and/or Bayesian framework to estimate the average from the observations with binned measurements. There are plenty of estimators. Theory of statistics can show that some may "work better" than others. If you're a frequentist, you'll probably want one with 95% asymptotic coverage. Those estimators would probably be the "most valid". As another answer points out, your proposed method is likely to be biased, and so maybe not as "valid" as you would like. Reporting the percent of observations in each bin, however, is a very good way of explaining your data. If you feel strongly about giving an estimate of the overall mean, you could do so, but be sure to be clear that you used a midpoint calculation like your proposed method, and perhaps state that your estimate is not very precise.
33,737
Handling unbalanced data using SMOTE - no big difference?
I would like to bring to your attention that in the original SMOTE paper, the good results were based on combining SMOTE with random under-sampling. This is because applying SMOTE to achieve an equal balance with the majority class is not necessarily the best case for the classifier, as your results show. Thus, you may under-sample the majority class to different percentages of its original size (say 25%, 50%, 75%) and apply SMOTE to the minority samples with different numbers of synthetically generated samples (say 2, 3, 4). You end up with a grid of combinations and you may choose the one showing the best cross-validated results, as in the sketch below.
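In pseudocode-style R, that grid search might look like this; undersample(), smote() and cv_score() are hypothetical placeholders for whatever resampling and evaluation functions your toolkit provides:

    results <- expand.grid(maj_frac = c(0.25, 0.50, 0.75), dup = c(2, 3, 4))
    results$score <- apply(results, 1, function(r) {
      train <- rbind(undersample(majority, r["maj_frac"]),  # hypothetical helper
                     smote(minority, dup_size = r["dup"]))  # hypothetical helper
      cv_score(train)                                       # hypothetical helper
    })
    results[which.max(results$score), ]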
33,738
Handling unbalanced data using SMOTE - no big difference?
SMOTE isn't really about changing f-measure or accuracy... it's about the trade-off between precision and recall. By using SMOTE you can increase recall at the cost of precision, if that's something you want. Just look at Figure 2 in the SMOTE paper for how SMOTE affects classifier performance. Undersampling the majority class gets you less data, and most classifiers' performance suffers with less data. An alternative, if your classifier allows it, is to reweight the data, giving a higher weight to the minority class and a lower weight to the majority class. So why use something like SMOTE? Usually if the class you're interested in is rare, like finding defaults when predicting a credit score, a classifier giving 0-1 scores will say everyone doesn't default. Often in practice, one would rather have a classifier that returns the vast majority of the defaults, even if precision is less than 50%, as these can be examined by a human, or you can direct deeper, more expensive, data collection efforts towards these cases. If you use a classifier with a more continuous score, you can just lower the threshold to get more recall - i.e. for a logistic regression, start treating $X^T w > -2$ as positive - but this usually results in a lower f-measure, since that is not the "fulcrum point" at which the model was trained. By reweighting the proportion of the classes, you make your model train at the precision/recall tradeoff you prefer, which means you end up with both being slightly better than if you had just lowered the threshold.
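As a sketch of both levers for a logistic regression in R (train, test and the 0/1 response y are assumed placeholders; the 10:1 weight is an arbitrary choice):

    fit  <- glm(y ~ ., data = train, family = binomial)
    p    <- predict(fit, newdata = test, type = "response")
    hard <- p > 0.5   # default threshold
    soft <- p > 0.2   # lowered threshold: more recall, less precision
    w    <- ifelse(train$y == 1, 10, 1)   # up-weight the rare class 10:1
    fit2 <- glm(y ~ ., data = train, family = binomial, weights = w)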
33,739
Handling unbalanced data using SMOTE - no big difference?
Let's say your classes are split as 0: 10,000 and 1: 100. Then even if your model predicts ALL of the 1's incorrectly, it is still 99% accurate. Is that a good model for predicting 1's accurately? No. Hence SMOTE, even if the accuracy of the new model is only 96%. (The same argument holds for both accuracy and f-measure.)
33,740
Difference between hazard function and intensity function?
A Poisson process is a model for a stream of "random" arrivals and has the following properties:

- there can be at most one arrival at any instant $t$;
- the number of arrivals in any interval $(t_1,t_2]$ is a Poisson random variable, here denoted $\mathbb N(t_1,t_2]$;
- for $t_1 < t_2 \leq t_3 < t_4 \leq t_5 < t_6 < \cdots \leq t_{2n-1} < t_{2n}$, the Poisson random variables $\mathbb N(t_1,t_2], \mathbb N(t_3,t_4], \mathbb N(t_5,t_6], \cdots , \mathbb N(t_{2n-1},t_{2n}]$, which count the numbers of arrivals in the $n$ disjoint (non-overlapping) time intervals $(t_1,t_2],(t_3,t_4], (t_5, t_6], \cdots , (t_{2n-1},t_{2n}]$, are independent random variables;
- the probability of exactly one arrival in a small time interval $(t, t+\Delta t]$ is proportional to the length $\Delta t$ of the time interval; the probability of two or more arrivals in this small interval is $o(\Delta t)$ and can be neglected in the limit as $\Delta t \to 0$.

The constant of proportionality in this last item is assumed to be a constant $\lambda > 0$ for homogeneous Poisson processes but is assumed to vary with time for nonhomogeneous processes. That is, the probability of one arrival in the vanishingly small interval $(t, t+\Delta t]$ is $\lambda(t)\Delta t$ while the probability of no arrivals during this interval is $1 - \lambda(t)\Delta t$. Here, of course, we assume that $\lambda(t) > 0$ for all $t$. $\lambda(t)$ is called the intensity of the process at time $t$.

Let $P_0(t)$ denote the probability that there are no arrivals in the interval $(0,t]$. If no arrivals occurred in $(0,t+\Delta t]$, then it must be that there are no arrivals in $(0,t]$ and no arrivals in $(t,t+\Delta t]$. The numbers of arrivals in these two disjoint time intervals are independent random variables, and so we see that $$\begin{align} P_0(t+\Delta t) &= P_0(t)(1-\lambda(t)\Delta t)\\ P_0(t+\Delta t) - P_0(t) &= - \lambda(t)P_0(t)\Delta t\\ \frac{P_0(t+\Delta t) - P_0(t)}{\Delta t} &= -P_0(t)\lambda(t)\\ \frac{\mathrm dP_0(t)}{\mathrm dt} &= -P_0(t)\lambda(t)\\ P_0(t) &= \exp\left(-\int_0^t \lambda(\tau)\,\mathrm d\tau\right)\tag{1}\\ &= \exp\left(-t\cdot\bar{\lambda}(0,t]\right)\tag{2} \end{align}$$ where $\bar{\lambda}(t_1,t_2]$ denotes the average value $\displaystyle\frac{1}{t_2-t_1}\int_{t_1}^{t_2}\lambda(t)\,\mathrm dt$ over the time interval $(t_1,t_2]$.

Skipping additional details, I will assert that the parameter of the Poisson random variable $\mathbb N(t_1,t_2]$ is $\displaystyle \int_{t_1}^{t_2}\lambda(t)\,\mathrm dt$. Thus, the average number of arrivals in $(t_1,t_2]$ is $$E\left[\mathbb N(t_1,t_2]\right] = \int_{t_1}^{t_2}\lambda(t)\,\mathrm dt = (t_2-t_1)\bar{\lambda}(t_1,t_2].\tag{3}$$ Note that the average number of arrivals in $(t_1,t_2]$ per unit time is $\bar{\lambda}(t_1,t_2]$ and is called the average intensity over this time interval, while $\lambda(t)$ is called the (instantaneous) intensity at time $t$.

Poisson processes deal with a stream of arrivals whereas hazard rates and survival analysis deal with only one arrival -- the arrival of the Angel of Death! Consider a system that is put into operation at time $0$ and fails at some random time $X > 0$. The hazard rate function $h(t)$ tells us the conditional probability of the system failing in the interval $(t,t+\Delta t]$, conditioned on the system being in working condition at time $t$.
Thus, $$\begin{align} h(t)\Delta t &= P\{X \in (t,t+\Delta t]\mid X > t\}\\ &= \frac{P\left(\{X \in (t,t+\Delta t]\}\cap \{X > t\}\right)}{P\{X > t\}}\\ &= \frac{P\{X \in (t,t+\Delta t]\}}{P\{X > t\}}\\ &= \frac{f_X(t)\Delta t}{1 - F_X(t)}. \end{align}$$ Consequently, $$\begin{align} \int_0^t h(\tau)\,\mathrm d\tau &= \int_0^t \frac{f_X(\tau)}{1 - F_X(\tau)}\,\mathrm d\tau\\ &= - \ln (1-F_X(\tau))\big|_0^t\\ &= -\ln (1-F_X(t))\\ 1-F_X(t) = P\{X > t\} &= \exp \left(- \int_0^t h(\tau)\,\mathrm d\tau\right)\tag{4} \end{align}$$ which of course looks a lot like $(1)$, and both integrals are telling us the probability that there are no arrivals in $(0,t]$. However, if the first arrival after $0$ occurs at time $T$, then analysis of the time of the next arrival is based on $\lambda(t)$ for $t \geq T$, whereas there are no new arrivals in survival analysis: the system is dead and that's all there is to it. Now, we can extend the paradigm to say that the failed system is instantaneously replaced by a brand-new system that begins operating at time $T$, but the analysis now begins anew and the hazard rate $\hat{h}(t)$ for the replacement is $h(t-T)$, etc. In other words, the probability that the replacement is struck dead in $(T, T+\Delta t]$ is $h(0)\Delta t$, not $h(T)\Delta t$. What the OP conjectured, viz. "To me, it seems like the intensity function deals with reoccurring failure, while the hazard function deals only with the time to first failure," is correct.
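To make the intensity concrete, a minimal R sketch of simulating a nonhomogeneous Poisson process by thinning; the intensity $\lambda(t) = 2 + \sin t$ is an arbitrary assumed example, bounded above by 3:

    set.seed(1)
    T_end <- 10; lam_max <- 3
    t <- 0; arrivals <- c()
    repeat {
      t <- t + rexp(1, lam_max)                # candidate from a rate-3 homogeneous process
      if (t > T_end) break
      if (runif(1) < (2 + sin(t)) / lam_max)   # keep with probability lambda(t) / lam_max
        arrivals <- c(arrivals, t)
    }
    arrivals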
33,741
Difference between hazard function and intensity function?
From the theory on counting processes and survival analysis, we find (informally) a simple relation between the hazard rate and the intensity function (refer to [1]).

Definitions

Let $X_1,\cdots, X_n$ be a sample of $n$ uncensored continuously distributed survival times (e.g. time of failure after repair) with density function $f$ and distribution function $F$.

Hazard rate $\alpha$

The hazard rate $\alpha(t)$ is the probability that the event $X_i$ happens at time $t$ given that it has not happened before $t$. Loosely, this means: $P(X_i \text{ happens at time }t \text{ given the past})=P(X_i\in [t,t+dt)\vert X_i\geq t) =\alpha(t)dt$, and $\alpha$ can be expressed as $\alpha = \frac{f}{1-F}$.

Event-counting process $N(t)$

Now, let's define $N=\left(N(t)\right)_{t\geq 0}$ as the event-counting process related to $(X_i)_i$, i.e. $N$ counts the number of events $X_i$ that occurred before or at time $t$: $$N(t)=\# \{ i : X_i\leq t \}$$

Intensity process $\lambda$

Then, the intensity process $\lambda$ of $N$ is "a measure of the rate of change of its predictable part" [2]. Loosely put, $\lambda(t)$ is the instantaneous ($dt\rightarrow 0$) expected number of events counted in the interval $[t,t+dt)$ given the past (which is represented by the $\sigma$-algebra $\mathcal{F}_{t^-}$). This yields $$\lambda(t)dt=\mathbb{E}\left(N(t+dt)-N(t)\vert \mathcal{F}_{t^-}\right)$$

Expected number of observations in $[t, t+dt)$

In terms of $\alpha$: looking informally at the expected number of events to be observed in the interval $[t,t+dt)$ (with $dt$ small) given the past, we have: $$\mathbb{E}\left(\# \{ i:X_i\in[t,t+dt)\}\vert \mathcal{F}_{t^-}\right)=\#\{i:X_i\geq t \}\cdot \alpha(t)dt \qquad (1)$$

In terms of $\lambda$: the left part of equation $(1)$, $\mathbb{E}\left(\# \{ i:X_i\in[t,t+dt)\}\vert \mathcal{F}_{t^-}\right)$, is the expected number of events to be observed in the interval $[t,t+dt)$. It is equivalent to $\mathbb{E}\left(N(t+dt)-N(t)\vert \mathcal{F}_{t^-}\right)$ by definition of $N(t)$, which counts events prior to $t$: $N(t+dt)-N(t)$ counts the events in the interval $[t,t+dt)$, and we know that this equals $\lambda(t) dt$. Hence we have $$\mathbb{E}\left(\# \{ i:X_i\in[t,t+dt)\}\vert \mathcal{F}_{t^-}\right)=\mathbb{E}\left(N(t+dt)-N(t)\vert \mathcal{F}_{t^-}\right)=\lambda(t) dt$$

The number at risk is the link between the intensity process and the hazard rate

If we define the process $Y$ as $Y(t)=\# \{ i:X_i\geq t\}$, which represents the number at risk at time $t$ (i.e. the number of events $X_i$ that have not yet been observed at $t$), we have that $$\lambda(t)=Y(t)\alpha(t)$$ which gives a nice relation between the intensity function $\lambda$ and the hazard rate $\alpha$. More details, including the consideration of censored random times, can be found in [1, section II.1].

References

[1] P. K. Andersen, Ø. Borgan, R. D. Gill and N. Keiding. (1993) "Statistical Models Based on Counting Processes". Springer-Verlag: New York.

[2] Intensity of counting processes, https://en.wikipedia.org/w/index.php?title=Intensity_of_counting_processes&oldid=960041135 (last visited Sept. 30, 2020).
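A small numeric check of $\lambda(t)=Y(t)\alpha(t)$ in R: for uncensored $X_i \sim \mathrm{Exp}(a)$, the Nelson-Aalen estimator $\sum_{x_{(i)}\le t} 1/Y(x_{(i)})$ of the cumulative hazard should track the true $a\,t$ (the sample size and rate are arbitrary choices):

    set.seed(2)
    a <- 0.5
    x <- sort(rexp(200, a))   # 200 uncensored failure times
    Y <- 200:1                # number at risk just before each ordered event time
    na <- cumsum(1 / Y)       # Nelson-Aalen estimate of the cumulative hazard
    plot(x, na, type = "s")
    abline(0, a, lty = 2)     # true cumulative hazard a * t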
33,742
Difference between hazard function and intensity function?
The hazard rate and the intensity function can take the same functional form, but conceptually they are different: hazard rates are based on continuous values taken from a population, while the intensity measures the rate at which events occur in time.
33,743
Difference between hazard function and intensity function?
The hazard function is typically encountered in event history/survival analysis models. The hazard is the probability of experiencing the event in a given time period/by a given point in time (depending on how time is operationalized in the model), conditional on not having experienced the event before that period/point. This is in contrast to the survival function, which is also indexed by time but gives the probability of not yet having experienced the event by that point/period across all of study time.
33,744
How did Efron imagine the bootstrap?
In his own words: My first thoughts on the bootstrap centered around variance and bias estimation. This was natural enough given the bootstrap’s roots in the jackknife literature, with Quenouille (1949) on bias and Tukey (1958) on variance setting the agenda. The oldest note I can find says simply “What is the jackknife an approximation to?” Poor English, but a good question that resulted in the 1977 Rietz Lecture, “Bootstrap Methods: Another Look at the Jackknife” (Efron, 1979). Jaeckel’s (1972) Bell Labs memorandum on the infinitesimal jackknife was particularly helpful in answering the approximation question. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.367.4292&rep=rep1&type=pdf
33,745
Why is the MLE of N of the discrete uniform distribution the value you choose?
Looking at the general case of a sample of $m$ realizations of an i.i.d. sample, the joint density of the sample is $$f(\mathbf X) = \prod_{i=1}^m \frac 1n \cdot \mathbf I\Big\{X_i\in \{1,...,n\}\Big\}$$ where $\mathbf I\{\}$ is the indicator function. Using the properties of the indicator function, and treating the joint density as a likelihood function of the unknown parameter $n$ given the actual realization of the sample, we have $$ L(n \mid \mathbf x) = \frac 1{n^m} \cdot \min_i\Big(\mathbf I\Big\{x_i \le n\Big\}\Big)$$ Now, if we overlooked the existence of the indicator function, the term $\frac 1{n^m}$ is decreasing in $n$, so the value of $n$ that would maximize the likelihood would be the smallest possible $n$, i.e. $n=1$. But the existence of the (minimum of the) $m$ indicator functions tells us that if even one $x$-realization is larger than the chosen value of $n$, then the indicator function for this $x$-realization will equal zero, hence the minimum of the $m$ indicator functions will equal zero, and hence the likelihood will equal zero. So we need to choose $\hat n$ so that all realizations of the sample are equal to or smaller than it... so why not choose an arbitrarily large value? Because the further away we move from $\hat n =1$, the smaller the value of the likelihood becomes. So we want to move away as little as possible: this means that we choose $\hat n = \max_i\{x_i\}$, which maximizes the likelihood subject to the constraint represented by the indicator function, since it reduces the value of the likelihood by no more than is needed to satisfy the constraint. Obviously the above holds for $m=1$ also.

RESPONSE TO COMMENT 2022-05-28 (user @user1916067)

Suppose we have a sample of 3 observations, say $\{3, 4, 5\}$, so $m=3$. Using the expression for $L(n \mid \mathbf x)$, compute the value of the likelihood for the following candidate MLEs: a value smaller than the realized values, say $\hat n = 2$; the values from the observed sample, $\hat n \in \{3, 4, 5\}$; and a value larger than the realized values, say $\hat n = 6$. You will find that the value of the likelihood for $\hat n \in \{2,3,4\}$ is zero, and the value for $\hat n = 6$ is smaller than its value when $\hat n = 5 = \max_i x_i$.
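That comparison is easy to reproduce; a small R sketch evaluating $L(n \mid \mathbf x) = n^{-m}\,\mathbf I\{\max_i x_i \le n\}$ over candidate values:

    x <- c(3, 4, 5); m <- length(x)
    n <- 1:8
    L <- ifelse(n >= max(x), n^(-m), 0)   # likelihood of each candidate n
    rbind(n = n, L = L)                   # zero for n < 5, maximal at n = 5, then decreasing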
33,746
Why is the MLE of N of the discrete uniform distribution the value you choose?
It's pretty obvious mathematically, though I tend to think it's less so intuitively. Suppose the true maximum is $Y$. Then $Y$ has to satisfy $Y \ge X$. But the probability of observing $X$ is exactly $1/Y$, which is maximized, subject to that lower-bound constraint, by taking $Y$ as small as the constraint allows, namely $Y = X$.
33,747
Why is the MLE of N of the discrete uniform distribution the value you choose?
The density function of the discrete uniform distribution is $f(x)= 1/N$ for $x \in \{1,\dots, N\}$ and $f(x)= 0$ otherwise. Hence the likelihood function is $L(X_1,\dots,X_n;N) = N^{-n}$ provided that $\max(X_1,\dots,X_n) \le N$, and zero otherwise. Since $N^{-n}$ is decreasing in $N$, the likelihood is maximized by the smallest admissible value of $N$, so the MLE is $\hat N = \arg\max_N L(X_1,\dots,X_n;N)=\max(X_1,\dots,X_n).$
33,748
Scoring items which are not easily compared
Let me expand on the alternative solution proposed by @curious_cat.

$P_{ij}$ is the matrix of pitches, $L_{ij}$ is the matrix of sells, and $S_{ij} = L_{ij}/P_{ij}$ is the matrix of success rates (elementwise division where it exists, and 0 elsewhere). As @curious_cat suggested, you want to approximate $S$ by the outer product of two positive vectors, $$S_{ij} \approx A_i M_j,$$ where $A$ holds the product attractiveness values and $M$ the merchant skills. Least-squares minimization leads to $$\min \| S - A M^T \|_2$$ where $\| \cdot \|_2$ is the Frobenius norm. BUT you do not want to minimize over the entries in which $S_{ij}$ is not defined. So what you really want is something like $$ \min \|W \odot (S - A M^T)\|_2$$ where $\odot$ is elementwise multiplication.

1) As a first approximation, $w_{ij}$ is 0 where $p_{ij}$ is 0, and 1 elsewhere. This is a weighted non-negative matrix factorization (or approximation) problem; Google should give some references for it.

2) Now, shooting from the hip, let us try to address the point also made by @curious_cat, that you should trust a success rate of 1000 sells out of 2000 pitches more than 2 sells out of 4 pitches. The weight $w_{ij}$ need not be uniformly 1 for the entries that are defined in $S_{ij}$: one can give more weight to success rates based on more pitches. My guess is to use $\sqrt{p_{ij}}$ as the weight. The intuition is that the confidence interval on the success rate is inversely proportional to $\sqrt{p_{ij}}$.
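For concreteness, here is a minimal sketch of this rank-one weighted approximation on synthetic data, using the $\sqrt{p_{ij}}$ weights suggested above. It uses plain alternating weighted least squares rather than a library NMF routine; with nonnegative $S$ and positive initialization the iterates happen to stay nonnegative, but a real NMF solver would enforce that explicitly. All names and the toy data are hypothetical:

    import numpy as np

    rng = np.random.default_rng(0)
    P = rng.integers(0, 50, size=(100, 100))   # pitches: rows = products, cols = merchants
    L = rng.binomial(P, 0.3)                   # sells
    S = np.divide(L, P, out=np.zeros(P.shape), where=P > 0)  # success rates
    W2 = P                                     # squared weights: w_ij = sqrt(p_ij)

    # alternating closed-form weighted least-squares updates for A and M
    A = np.ones(100)   # product attractiveness
    M = np.ones(100)   # merchant skill
    for _ in range(200):
        M = (W2 * S * A[:, None]).sum(0) / ((W2 * (A ** 2)[:, None]).sum(0) + 1e-12)
        A = (W2 * S * M[None, :]).sum(1) / ((W2 * (M ** 2)[None, :]).sum(1) + 1e-12)
    # A and M now hold attractiveness and skill scores; note the scale
    # ambiguity (A*c and M/c fit equally well for any c > 0)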
33,749
Scoring items which are not easily compared
This type of problem is typically referred to in econometrics and marketing research as a "choice modeling" problem. Texts dealing with such problems include:

Louviere, J., D. A. Hensher, et al. (2000). Stated Choice Methods: Analysis and Application. Cambridge, Cambridge University Press.
Train, K. E. (2009). Discrete Choice Methods with Simulation. Cambridge, Cambridge University Press.
Rossi, P. E., G. M. Allenby, et al. (2005). Bayesian Statistics and Marketing, Wiley.

The simplest practical model you could estimate would be a binary logit model with a dependent variable indicating when an object is purchased versus not purchased, and two independent variables: a categorical variable for merchant and a categorical variable for product. (Or, if you do not know anything about when a product is not purchased, you could use Poisson regression or some other count model.) The parameter estimate for each merchant would be their skill score, and the parameter for each product would be the "attractiveness" score; the "attractiveness" score is more commonly referred to as a "utility" in choice modeling. A practical computational problem is that unless you have only a few hundred merchants and a few hundred products (i.e., a few hundred levels for each categorical variable), you will struggle to estimate the model and may need a "random effects" model (sometimes referred to as a "hierarchical model" in this context). A sketch of the simple fixed-effects version appears below.

In addition to the assumption that you mention, a key set of assumptions that will determine the validity of your analysis relates to which alternatives are available at a given time. For example, a product that is intrinsically unattractive may be purchased regularly because more attractive products are not available at the purchase time. This effect can have a very large impact on your resulting estimates: ignoring it will inadvertently confound the attractiveness of a product with its availability. The texts cited earlier discuss various modifications of choice models to deal with many of the types of assumptions likely relevant to your problem.
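As a sketch of that simplest fixed-effects version, assuming pitch-level data with hypothetical column names ('sold', 'merchant', 'product') and simulated in place of real data, using statsmodels:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # toy stand-in for pitch-level data: one row per pitch
    rng = np.random.default_rng(1)
    df = pd.DataFrame({'merchant': rng.integers(0, 20, 5000),
                       'product': rng.integers(0, 20, 5000)})
    skill = rng.normal(size=20)      # true merchant skills
    utility = rng.normal(size=20)    # true product utilities
    eta = skill[df['merchant']] + utility[df['product']] - 1
    df['sold'] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

    fit = smf.logit('sold ~ C(merchant) + C(product)', data=df).fit(disp=0)
    # the C(merchant) coefficients estimate skill and the C(product)
    # coefficients estimate attractiveness (utility), each relative
    # to the omitted baseline category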
33,750
Scoring items which are not easily compared
Your problem can be modeled by a Rasch Model. Here is a document that explains the model with the following example:

The Rasch model is a statistical model of a test that attempts to describe the probability that a student answers a question correctly. It assigns to every student a real number, a, called the "ability", and to every question a real number, d, called the "difficulty".

This is similar to your situation, where each merchant has some inherent "skill" and each product has an inherent "attractiveness".
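In this analogy, the Rasch probability that a merchant with skill $a$ sells a product of difficulty $d$ (the negative of its attractiveness) would be the logistic function of $a - d$; a one-line sketch with made-up values:

    import numpy as np

    def rasch_prob(ability, difficulty):
        # P(success) = exp(a - d) / (1 + exp(a - d))
        return 1 / (1 + np.exp(-(ability - difficulty)))

    print(rasch_prob(ability=1.2, difficulty=0.5))   # about 0.67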
33,751
Scoring items which are not easily compared
Why not, for each merchant, compute a success rate for every product he sells, $S_{ij}$ ($i$ indexes products and $j$ indexes merchants)? Average this to compute a merchant's average baseline success rate ($S_j$). Now compute differences $\delta S_{ij}=S_{ij} - S_j$. Each of these $\delta S_{ij}$ indicates how much better or worse a product does with respect to that merchant's baseline success rate. If you sum up these $\delta S_{ij}$ over all merchants $j$, you obtain some sort of score for the attractiveness of every product, $S_i$. The merchant skill metric would be the dual of this. One problem is that this doesn't weigh in the confidence that comes with more data: 2 successes out of 4 pitches ought to (perhaps) matter less than 1000 successes out of 2000 pitches. You'd have to find some way to adjust for that, in case it matters.

Alternatively: assume every merchant has a skill value $M_j$ and every product has a product attractiveness $A_i$. You could model the success rate of product $i$ sold by merchant $j$ ($S_{ij}$) as some function of $M_j$ and $A_i$, with possible cross terms. If you fit this, you might be able to score using the coefficients. If you consider $S_{ij} = M_j \times A_i + \epsilon_{ij}$ you get one simple model. The matrix of success elements is possibly sparse (since not all merchants sell all products). If it were fully populated, you would estimate 200 coefficients from 100x100 success-rate numbers so as to minimize the $\epsilon_{ij}$ in some least-squares sense.

Possible flaws: I don't see an easy way to interpret relative scores. E.g., if two products have attractiveness $A_{i1}$ and $A_{i2}$, how much better is one than the other? A simple ratio? A log likelihood? Perhaps there is some interpretation, but I cannot see it yet. From a strictly ordering perspective it shouldn't matter.

PS: How sparse is your matrix? Knowing that you have millions of pitches, maybe not too sparse? Or is it? I.e., out of a maximum possible 10,000 merchant-product combinations, how many are filled (i.e., have at least one pitch)?

PS1: Uniqueness. I cannot prove whether your $M_j$ and $A_i$ values will be unique, or even close to it. If there are multiple solutions it'll be an interesting situation. Maybe there are stronger math results about this?
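A small pandas sketch of the first suggestion (the baseline-difference scores), on a toy pair-level table; all column names and numbers are hypothetical:

    import pandas as pd

    df = pd.DataFrame({'merchant': [1, 1, 2, 2],
                       'product': ['a', 'b', 'a', 'b'],
                       'pitches': [10, 20, 40, 30],
                       'sells': [4, 5, 8, 12]})
    df['S_ij'] = df['sells'] / df['pitches']
    df['S_j'] = df.groupby('merchant')['S_ij'].transform('mean')  # merchant baseline
    df['dS'] = df['S_ij'] - df['S_j']                             # delta S_ij
    attractiveness = df.groupby('product')['dS'].sum()            # product score S_i
    print(attractiveness)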
33,752
Scoring items which are not easily compared
I think you are looking to attribute qualities that are not inherent in, or do not follow from, your data. You have unambiguous data on success rate, and there should be a way to calculate or estimate a merchant's "adjusted success rate" given the rate at which his products tend to sell among all merchants. Similarly, there should be a way to determine each product's adjusted success rate given the success rates of the merchants who tend to sell it. These two angles on the analysis might be accomplished with a nested/hierarchical/multi-level logistic regression, if the data are suitable for it. But that wouldn't necessarily reveal the attributes of "skill" or "attractiveness"; it might yield workable proxies for them, but how adequate these proxies would be is a substantive question more than a statistical one.
33,753
Scoring items which are not easily compared
I would just create a two-way table for this, e.g. with rows corresponding to the different merchants and columns corresponding to the different products. Each cell in this 100 x 100 table/matrix holds the count/proportion of times the combination was successful. Once this is done, you can sort this matrix by rows and then by columns (or the other way round) to get the merchant-skill and product-attractiveness orderings.
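A sketch of that table with pandas, ordering rows and columns by their mean success rate (toy data, hypothetical column names):

    import pandas as pd

    # one row per pitch, with a 0/1 success indicator
    df = pd.DataFrame({'merchant': [1, 1, 2, 2, 2],
                       'product': ['a', 'b', 'a', 'a', 'b'],
                       'success': [1, 0, 1, 1, 0]})
    rate = df.pivot_table(index='merchant', columns='product',
                          values='success', aggfunc='mean')
    # order merchants and products by their average success rate
    rate = rate.loc[rate.mean(axis=1).sort_values().index,
                    rate.mean(axis=0).sort_values().index]
    print(rate)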
33,754
Scoring items which are not easily compared
I'd recommend a logistic regression with merchants and products as random effects. In R, this would look like:

    library("lme4")
    fit <- glmer(sold ~ (1 | merchant) + (1 | product),
                 data = data, family = binomial)
    summary(fit)
    ranef(fit)  # random-effect estimates: merchant "skill" and
                # product "attractiveness" on the log-odds scale

Extracting the estimates is relatively straightforward, and I handle millions of data points with approaches similar to this on standard workstations all the time. The model fitting typically only takes a few minutes.
33,755
Calculating R-squared (coefficient of determination) with centered vs. un-centered sums of squares
As Stephane hinted in the comment, it is the difference between a model with or without an intercept that matters. For what it is worth, here is code from one of my packages:

    ## cf src/library/stats/R/lm.R and the case with no weights and an intercept
    f <- object$fitted.values
    r <- object$residuals
    mss <- if (object$intercept) sum((f - mean(f))^2) else sum(f^2)
    rss <- sum(r^2)
    r.squared <- mss/(mss + rss)

Residuals are centered by design; that leaves the fitted values, which need to be centered when there is an intercept and not otherwise.
33,756
Calculating R-squared (coefficient of determination) with centered vs. un-centered sums of squares
Ok, I thought I'd follow up on this. I've been struggling with the answers here a bit, and have come to some better understanding of the problem. For posterity, I also think that a full explanation of why there are two different forms of this equation for $R^2$ would be beneficial to anyone who stumbles upon this thread. I don't know if this is common knowledge or not; no one seems to explain (possibly a lot of people just don't know, or possibly it's so basic that it's expected that people 'just know') WHY there are two forms for $R^2$. This includes several sets of lecture notes by professors at major universities; perhaps I'm just not looking in the right places.

The reason for the two different equations above comes from the fact that you're comparing the model against the null hypothesis. The null hypothesis is "there exists zero relationship between the dependent and independent variables". This means you're taking the slope to be zero. Another way to say this is that you're comparing the regression model you build to a nested model with one fewer parameter.

Now, suppose we have a set of data with one independent variable (x) and one dependent variable (y). We have two choices:

We choose to model the relationship between x and y with a two-parameter linear model, i.e., $y_i = a_0 + a_1 x_i + \epsilon_i$. The null hypothesis is $y_i = a_0 + \epsilon_i$, and $\bar{y} \neq 0$ in general. Thus the appropriate form of $R^2$ to use is: $$ R^2 = 1- \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i (y_i - \bar{y})^2} $$

We choose to model the relationship between x and y with a one-parameter linear model, namely $y_i = a_1 x_i + \epsilon_i$. The null hypothesis is that there is no relationship between x and y, so the correct null hypothesis is $y_i = \epsilon_i$. In other words, the null hypothesis is just white noise. Clearly, $\mathbb{E}(y) = 0$ under this null, so the correct form of $R^2$ is $$ R^2 = 1- \frac{\sum_i (y_i - \hat{y}_i)^2}{\sum_i y_i^2} $$

A good way to think about this is the following: suppose the null hypothesis were (100%) correct, and there truly were no relationship between x and y. What would we expect? If anything's fair, the answer is "We expect $R^2=0$." In the case where we choose a two-parameter model, we expect that $\bar{y} = \hat{y}_i = a_0$. If this isn't obvious, try drawing the picture with the model value under the null hypothesis $\hat{y}_i$, the data point $y_i$, and the average $\bar{y}$. If the model is correct (i.e., as the number of data points $\to \infty$), you should be able to see graphically that $\bar{y} = \hat{y}_i = a_0$ when the null hypothesis is true. Conversely, in the one-parameter case, using the same picture, $\hat{y}_i = \bar{y} = 0$. There's a slight rub here, because you have to worry about how these things go to zero; L'Hopital will tell you that, in this case at least, $\lim 0/0 = 0$, and everything is ok.

You can see why you get funny things happening with $R^2$ (like negative values) if you use the wrong form of the equation. I noticed it first because the statsmodels package in Python does one thing, and R does something else: it pains me to say, but R is right and statsmodels is wrong. (Well, not really "pains"...) I would love some feedback on this intuition. I have only found one reference where this is explained explicitly; please see this pdf file (download here), Section 5.3.6.

Additionally, the other linked answer on stackexchange alludes to this fact, but the reasoning wasn't completely clear to me (no offense to the person who answered the question; it is a very well-written response, and I can be dense at times!). Again, please correct my reasoning in the comments, and I will amend the post until it is acceptable.
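A quick numpy illustration of the "funny things" mentioned above, under assumptions chosen to make the point (data with a nonzero mean and no true slope, fit without an intercept): the centered denominator can drive $R^2$ negative for the no-intercept fit, while the uncentered form stays in $[0, 1]$:

    import numpy as np

    rng = np.random.default_rng(2)
    x = np.linspace(1, 10, 50)
    y = 5 + rng.normal(size=50)          # nonzero mean, no true slope

    a1 = (x * y).sum() / (x * x).sum()   # OLS slope of the no-intercept model
    ss_res = ((y - a1 * x) ** 2).sum()

    r2_uncentered = 1 - ss_res / (y ** 2).sum()               # stays in [0, 1]
    r2_centered = 1 - ss_res / ((y - y.mean()) ** 2).sum()    # can be negative
    print(r2_uncentered, r2_centered)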
33,757
Calculating R-squared (coefficient of determination) with centered vs. un-centered sums of squares
I guess that, as Stephane hinted in the comment, all the difference is in how you describe your model. If you describe the same model, the r squared will be the same in both cases. I will post some Python code to show that afterward, but first a word of caution: statsmodels, with the OLS function, does not add the intercept automatically, while the R formula will, so this may be the origin of your difference. If this makes you feel uncomfortable, try to use the new formula syntax. With the following code you can verify that they both give you the same value (statsmodels is actually tested to give the same results as R):

    # create the data for statsmodels
    import statsmodels.formula.api as smf
    import pandas as pd
    from pylab import arange, randn

    x = arange(20)
    y = x * 0.3 + randn(20)
    data = pd.DataFrame({'x': x, 'y': y})

    # create the data for Rpy
    import rpy2.robjects as robjects
    import pandas.rpy.common as com
    r = robjects.r
    from rpy2.robjects import FloatVector
    from rpy2.robjects.packages import importr
    stats = importr('stats')
    base = importr('base')
    robjects.globalenv["x"] = FloatVector(x)
    robjects.globalenv["y"] = FloatVector(y)

    # model without intercept
    lm0 = stats.lm("y ~ x + 0")
    s = base.summary(lm0)
    print s.rx2("r.squared")[0], smf.ols('y ~ x + 0', data).fit().rsquared
    print s.rx2("adj.r.squared")[0], smf.ols('y ~ x + 0', data).fit().rsquared_adj

    # model with intercept
    lm1 = stats.lm("y ~ x + 1")
    s = base.summary(lm1)
    print s.rx2("r.squared")[0], smf.ols('y ~ x + 1', data).fit().rsquared
    print s.rx2("adj.r.squared")[0], smf.ols('y ~ x + 1', data).fit().rsquared_adj
33,758
Calculating R-squared (coefficient of determination) with centered vs. un-centered sums of squares
My understanding is: the uncentered $R^2$ measures the explanatory power of all regressors, including the constant regressor, while the centered $R^2$ measures the explanatory power of the non-constant regressors only.
33,759
Calculating R-squared (coefficient of determination) with centered vs. un-centered sums of squares
When there is an intercept in the regression model, one can prove that $$ \sum_i (y_i-\bar{y})^2 = \sum_i (\hat{y}_i - \bar{y})^2 + \sum_i (y_i - \hat{y}_i)^2. $$ The proof uses the two OLS first-order conditions, and this equality ensures that $R^2$ is always in $[0,1]$. When there is no intercept, the equation above does not hold. Instead, we have $$\sum_i y_i^2 = \sum_i \hat{y}_i^2 + \sum_i (y_i - \hat{y}_i)^2.$$ The proof uses the single OLS first-order condition for this homogeneous model, and the uncentered $R^2$ is always in $[0,1]$.
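Both identities are easy to verify numerically; a short numpy check on simulated data (any data would do):

    import numpy as np

    rng = np.random.default_rng(3)
    x = rng.normal(size=100)
    y = 2 + 3 * x + rng.normal(size=100)

    # with intercept: regress y on [1, x]
    X = np.column_stack([np.ones(100), x])
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
    lhs = ((y - y.mean()) ** 2).sum()
    rhs = ((yhat - y.mean()) ** 2).sum() + ((y - yhat) ** 2).sum()
    print(np.isclose(lhs, rhs))          # True

    # without intercept: regress y on x alone
    yhat0 = x * (x @ y) / (x @ x)
    print(np.isclose((y ** 2).sum(),
                     (yhat0 ** 2).sum() + ((y - yhat0) ** 2).sum()))  # True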
33,760
Uncertainty and sensitivity analysis
You can try one of the tools provided here. Those are Matlab solutions, with very nice code and modern methods. First I would suggest you try the graphical tools from the library to make sense of the data. As you did not provide details on what you need, here are some comments on the methods involved:

Global Sensitivity Analysis. Global sensitivity analysis is the study of how the uncertainty in the output of a model (numerical or otherwise) can be apportioned to different sources of uncertainty in the model input. "Global" could be an unnecessary qualifier here, were it not for the fact that most analyses found in the literature are local or one-factor-at-a-time.

Monte Carlo (or Sample-based) Analysis. Monte Carlo (MC) analysis is based on performing multiple model evaluations with randomly selected model input, and then using the results of these evaluations to determine both the uncertainty in model predictions and the contribution of each input factor to this uncertainty. An MC analysis involves the selection of ranges and distributions for each input factor; generation of a sample from those ranges and distributions; evaluation of the model for each element of the sample; and, finally, uncertainty analysis and sensitivity analysis. (A minimal sketch follows below.)

Response Surface Methodology. This procedure is based on developing a response-surface approximation to the model under consideration. This approximation is then used as a surrogate for the original model in uncertainty and sensitivity analysis. The analysis involves the selection of ranges and distributions for each input factor; the development of an experimental design defining the combinations of factor values at which to evaluate the model; evaluations of the model; construction of a response-surface approximation to the original model; and uncertainty and sensitivity analysis.

Screening Designs. Factor screening may be useful as a first step when dealing with a model containing a large number of input factors (hundreds). By input factor we mean any quantity that can be changed in the model prior to its execution: a model parameter, an input variable, or a model scenario. Often, only a few of the input factors, and groupings of factors, have a significant effect on the model output.

Local (Differential) Analysis. Local SA investigates the impact of the input factors on the model locally, i.e. at some fixed point in the space of the input factors. Local SA is usually carried out by computing partial derivatives of the output functions with respect to the input variables (differential analysis). In order to compute the derivatives numerically, the input parameters are varied within a small interval around a nominal value. The interval is not related to our degree of knowledge of the variables and is usually the same for all of them.

FORM-SORM. FORM and SORM are useful methods when the analyst is not interested in the magnitude of $Y$ (and hence its potential variation) but in the probability of $Y$ exceeding some critical value. The constraint $Y - Y_{crit} < 0$ determines a hyper-surface in the space of the input factors, $X$. The minimum distance between some design point for $X$ and that hyper-surface is the quantity of interest.

Good luck!
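As a minimal sketch of the Monte Carlo steps above, with a toy stand-in model and assumed input distributions (correlation is used here only as a crude linear sensitivity measure, not a variance decomposition):

    import numpy as np

    rng = np.random.default_rng(4)
    n = 10_000

    # steps 1-2: assumed distributions for the inputs, and a sample from them
    x1 = rng.uniform(0, 1, n)
    x2 = rng.normal(0, 1, n)
    x3 = rng.uniform(-1, 1, n)

    # step 3: evaluate the model on each sample point (toy model)
    y = 4 * x1 + x2 + 0.1 * x3 ** 2

    # step 4: uncertainty analysis of the output ...
    print(y.mean(), y.std())
    # ... and a crude sensitivity ranking of the inputs
    for name, xi in [('x1', x1), ('x2', x2), ('x3', x3)]:
        print(name, np.corrcoef(xi, y)[0, 1])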
33,761
Uncertainty and sensitivity analysis
To address the first question, I suggest you have a look at canonical correlation analysis and at a more recent dimension-reduction technique called sliced inverse regression. On the latter, see the initial paper by Ker-Chau Li, "Sliced inverse regression for dimension reduction (with discussion)", Journal of the American Statistical Association, 86(414):316–327, 1991. It is freely available on the Internet; the version with the (interesting) comments you might have to buy, though.

Some important parameters for the choice of a method in your situation are:

the dimensionality of the input (n=3, n=15 and n=50 are very different problems);
the time needed to get one evaluation (0.1 s, 5 min and 5 hours are also very different problems);
the assumptions that you can make about your model: is it linear? is it monotonic?

Also, you mention a possible multivariate output. If you have a few outputs that represent completely different things, just do several independent sensitivity analyses. If they are highly correlated, or functional, then that also changes the problem a lot. You should make all these points clear before settling on a given methodology.
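For a feel of what sliced inverse regression does, here is a bare-bones implementation sketch (not the reference algorithm of the paper in every detail; slicing on the sorted response and a simple eigen-based whitening step are assumed):

    import numpy as np

    def sir(X, y, n_slices=10, n_dirs=1):
        n, p = X.shape
        Xc = X - X.mean(axis=0)
        # whiten the predictors
        evals, evecs = np.linalg.eigh(np.cov(Xc, rowvar=False))
        W = evecs @ np.diag(evals ** -0.5) @ evecs.T
        Z = Xc @ W
        # slice the sorted response and average Z within each slice
        slices = np.array_split(np.argsort(y), n_slices)
        M = sum(len(s) / n * np.outer(Z[s].mean(axis=0), Z[s].mean(axis=0))
                for s in slices)
        # leading eigenvectors of M, mapped back to the original scale
        vals, vecs = np.linalg.eigh(M)
        return W @ vecs[:, ::-1][:, :n_dirs]

    rng = np.random.default_rng(5)
    X = rng.normal(size=(2000, 5))
    beta = np.array([1.0, 2.0, 0.0, 0.0, 0.0])
    y = (X @ beta) ** 3 + rng.normal(size=2000)
    d = sir(X, y).ravel()
    print(d / np.linalg.norm(d))   # roughly proportional to beta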
33,762
Uncertainty and sensitivity analysis
You may be able to use a variance-based global sensitivity analysis approach to answer the second question. According to Saltelli (2008), sensitivity analysis is "...the study of how uncertainty in the output of a model can be apportioned to different sources of uncertainty in the model input...". The approaches, such as those mentioned in the Saltelli book, normally focus on first generating an input sample, which is subsequently run through a model to generate outputs, which are then analysed. Of the resulting output metrics, the first-order sensitivity index $S_i$ measures the main-effect contribution of each input factor to the output variance, while the total sensitivity index $S_{T_i}$ also includes interaction effects.

Variance-based approaches decompose the variance in the output. They are computationally demanding and require a specific input sample. For your purposes, given that you have an existing batch of data, an alternative method is the Delta Moment-Independent Measure (Borgonovo 2007, Plischke et al. 2013), implemented in the Python library SALib. The following code, adapted from the library's examples, allows you to generate the sensitivity indices from existing data (here `param_values` holds the input samples and `Y` the corresponding model outputs):

    from SALib.analyze import delta
    from SALib.util import read_param_file

    # Read the parameter range file describing the input factors
    problem = read_param_file('../../SALib/test_functions/params/Ishigami.txt')

    # Perform the sensitivity analysis using the existing inputs and outputs
    Si = delta.analyze(problem, param_values, Y,
                       conf_level=0.95, print_to_console=False)

    # Returns a dictionary with keys 'delta', 'delta_conf', 'S1', and 'S1_conf'
    # e.g. Si['delta'] contains the sensitivity measure for each parameter,
    # in the same order as the parameter file
33,763
Finding the Moment Generating Function of chi-squared distribution
Yes, since $\chi^2$ is a sum of independent $Z_i^2$ terms, its MGF is the product of the MGFs of the individual summands. But then you need the MGF of $Z_i^2$, which is $\chi^2$ with 1 degree of freedom. The obvious way of calculating the MGF of $\chi^2$ is by integrating. It is not that hard: $$Ee^{tX}=\frac{1}{2^{k/2}\Gamma(k/2)}\int_0^\infty x^{k/2-1}e^{-x(1/2-t)}dx, \qquad t < 1/2.$$ Now do the change of variables $y=x(1/2-t)$, note that you get a Gamma function, and the result is yours. If you want deeper insights (if there are any), try asking at http://math.stackexchange.com.
33,764
Finding the Moment Generating Function of chi-squared distribution
I think the easiest way is to simply start with a single squared Gaussian: $$E[e^{tX^2}] = \int_{-\infty}^\infty e^{tx^2}\tfrac1{\sqrt{2\pi}}e^{-x^2/2}dx = \tfrac1{\sqrt{2\pi}}\int_{-\infty}^\infty e^{-(1-2t)x^2/2}dx = \tfrac1{\sqrt{1-2t}},$$ for $t<1/2$. Since the chi-squared with $k$ degrees of freedom is just a sum of $k$ independent squared Gaussians, its MGF is the $k$-th power of this, which gives the factor $k$ in the exponent: $(1-2t)^{-k/2}$.
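If you want to double-check the Gaussian integral step symbolically, a small sympy sketch (substituting $s = 1-2t$, assumed positive so the integral converges):

    import sympy as sp

    x = sp.symbols('x', real=True)
    s = sp.symbols('s', positive=True)   # s = 1 - 2t, requires t < 1/2

    integral = sp.integrate(sp.exp(-s * x**2 / 2) / sp.sqrt(2 * sp.pi),
                            (x, -sp.oo, sp.oo))
    print(sp.simplify(integral))         # 1/sqrt(s), i.e. 1/sqrt(1 - 2t)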
33,765
Finding the Moment Generating Function of chi-squared distribution
You can also do this calculation by brute-force straight from the general chi-squared distribution, without any intermediate appeal to sums of random variables. For $X \sim \chi_n^2$ we have moment generating function: $$\begin{equation} \begin{aligned} M_X(t) \equiv \mathbb{E}(\exp (tX)) &= \int \limits_0^\infty \exp(tx) \cdot \text{Chi-Sq}(x | n) dx \\[8pt] &= \frac{1}{2^{n/2} \Gamma(n/2)} \int \limits_0^\infty \exp(tx) \cdot x^{n/2-1} \exp(-x/2) dx \\[8pt] &= \frac{1}{2^{n/2} \Gamma(n/2)} \int \limits_0^\infty x^{n/2-1} \exp((t -\tfrac{1}{2})x) dx. \\[8pt] \end{aligned} \end{equation}$$ For the case where $t < \tfrac{1}{2}$, using the change-of-variable $r = (\tfrac{1}{2} - t)x$ we have: $$\begin{equation} \begin{aligned} M_X(t) &= \frac{1}{2^{n/2} \Gamma(n/2)} \int \limits_0^\infty x^{n/2-1} \exp((t -\tfrac{1}{2})x) dx. \\[8pt] &= (\tfrac{1}{2} - t)^{-n/2} \cdot \frac{1}{2^{n/2} \Gamma(n/2)} \int \limits_0^\infty r^{n/2-1} \exp(-r) dr. \\[8pt] &= (1 - 2t)^{-n/2} \cdot \frac{1}{\Gamma(n/2)} \int \limits_0^\infty r^{n/2-1} \exp(-r) dr. \\[8pt] &= (1 - 2t)^{-n/2}. \\[8pt] \end{aligned} \end{equation}$$
Finding the Moment Generating Function of chi-squared distribution
You can also do this calculation by brute-force straight from the general chi-squared distribution, without appeal to any intermediate appeal to sums of random variables. For $X \sim \chi_n^2$ we hav
Finding the Moment Generating Function of chi-squared distribution You can also do this calculation by brute-force straight from the general chi-squared distribution, without any intermediate appeal to sums of random variables. For $X \sim \chi_n^2$ we have moment generating function: $$\begin{equation} \begin{aligned} M_X(t) \equiv \mathbb{E}(\exp (tX)) &= \int \limits_0^\infty \exp(tx) \cdot \text{Chi-Sq}(x | n) dx \\[8pt] &= \frac{1}{2^{n/2} \Gamma(n/2)} \int \limits_0^\infty \exp(tx) \cdot x^{n/2-1} \exp(-x/2) dx \\[8pt] &= \frac{1}{2^{n/2} \Gamma(n/2)} \int \limits_0^\infty x^{n/2-1} \exp((t -\tfrac{1}{2})x) dx. \\[8pt] \end{aligned} \end{equation}$$ For the case where $t < \tfrac{1}{2}$, using the change-of-variable $r = (\tfrac{1}{2} - t)x$ we have: $$\begin{equation} \begin{aligned} M_X(t) &= \frac{1}{2^{n/2} \Gamma(n/2)} \int \limits_0^\infty x^{n/2-1} \exp((t -\tfrac{1}{2})x) dx. \\[8pt] &= (\tfrac{1}{2} - t)^{-n/2} \cdot \frac{1}{2^{n/2} \Gamma(n/2)} \int \limits_0^\infty r^{n/2-1} \exp(-r) dr. \\[8pt] &= (1 - 2t)^{-n/2} \cdot \frac{1}{\Gamma(n/2)} \int \limits_0^\infty r^{n/2-1} \exp(-r) dr. \\[8pt] &= (1 - 2t)^{-n/2}. \\[8pt] \end{aligned} \end{equation}$$
Finding the Moment Generating Function of chi-squared distribution You can also do this calculation by brute-force straight from the general chi-squared distribution, without appeal to any intermediate appeal to sums of random variables. For $X \sim \chi_n^2$ we hav
33,766
How to get Sphericity in R for a nested within subject design?
Try: library(ez) ezANOVA(data=subset(p12bl, exps==1), within=.(sucrose, citral), wid=.(subject), dv=.(resp) ) and the equivalent aov command, minus sphericity etc, is: aov(resp ~ sucrose*citral + Error(subject/(sucrose*citral)), data= subset(p12bl, exps==1)) Here's the equivalent using Anova from car directly: library(car) df1<-read.table("clipboard", header=T) #From copying data in the question above sucrose<-factor(rep(c(1:4), each=4)) citral<-factor(rep(c(1:4), 4)) idata<-data.frame(sucrose,citral) mod<-lm(cbind(S1C1, S1C2, S1C3, S1C4, S2C1, S2C2, S2C3, S2C4, S3C1, S3C2, S3C3, S3C4, S4C1, S4C2, S4C3, S4C4)~1, data=df1) av.mod<-Anova(mod, idata=idata, idesign=~sucrose*citral) summary(av.mod)
How to get Sphericity in R for a nested within subject design?
Try: library(ez) ezANOVA(data=subset(p12bl, exps==1), within=.(sucrose, citral), wid=.(subject), dv=.(resp) ) and the equivalent aov command, minus sphericity etc, is: aov(resp ~ sucrose*citr
How to get Sphericity in R for a nested within subject design? Try: library(ez) ezANOVA(data=subset(p12bl, exps==1), within=.(sucrose, citral), wid=.(subject), dv=.(resp) ) and the equivalent aov command, minus sphericity etc, is: aov(resp ~ sucrose*citral + Error(subject/(sucrose*citral)), data= subset(p12bl, exps==1)) Here's the equivalent using Anova from car directly: library(car) df1<-read.table("clipboard", header=T) #From copying data in the question above sucrose<-factor(rep(c(1:4), each=4)) citral<-factor(rep(c(1:4), 4)) idata<-data.frame(sucrose,citral) mod<-lm(cbind(S1C1, S1C2, S1C3, S1C4, S2C1, S2C2, S2C3, S2C4, S3C1, S3C2, S3C3, S3C4, S4C1, S4C2, S4C3, S4C4)~1, data=df1) av.mod<-Anova(mod, idata=idata, idesign=~sucrose*citral) summary(av.mod)
How to get Sphericity in R for a nested within subject design? Try: library(ez) ezANOVA(data=subset(p12bl, exps==1), within=.(sucrose, citral), wid=.(subject), dv=.(resp) ) and the equivalent aov command, minus sphericity etc, is: aov(resp ~ sucrose*citr
33,767
How to get Sphericity in R for a nested within subject design?
Did you try the car package, from John Fox? It includes the function Anova() which is very useful when working with experimental designs. It should give you corrected p-values following the Greenhouse-Geisser and Huynh-Feldt corrections. I can post a quick R example if you wonder how to use it. Also, there is a nice tutorial on the use of R with repeated measurements and mixed-effects models for psychology experiments and questionnaires; see Section 6.10 about sphericity. As a side note, Mauchly's Test of Sphericity is available in mauchly.test(), but it doesn't work with an aov object, if I remember correctly. The R Newsletter from October 2007 includes a brief description of this topic.
How to get Sphericity in R for a nested within subject design?
Did you try the car package, from John Fox? It includes the function Anova() which is very useful when working with experimental designs. It should give you corrected p-value following Greenhouse-Geis
How to get Sphericity in R for a nested within subject design? Did you try the car package, from John Fox? It includes the function Anova() which is very useful when working with experimental designs. It should give you corrected p-values following the Greenhouse-Geisser and Huynh-Feldt corrections. I can post a quick R example if you wonder how to use it. Also, there is a nice tutorial on the use of R with repeated measurements and mixed-effects models for psychology experiments and questionnaires; see Section 6.10 about sphericity. As a side note, Mauchly's Test of Sphericity is available in mauchly.test(), but it doesn't work with an aov object, if I remember correctly. The R Newsletter from October 2007 includes a brief description of this topic.
How to get Sphericity in R for a nested within subject design? Did you try the car package, from John Fox? It includes the function Anova() which is very useful when working with experimental designs. It should give you corrected p-value following Greenhouse-Geis
33,768
How to get Sphericity in R for a nested within subject design?
I generally recommend avoiding these types of sphericity tests altogether by using modern mixed modeling methods. If you are not working with few subjects this will give you a great deal of flexibility in modeling an appropriate covariance structure, freeing you from the strict assumption of sphericity when necessary. I infer from the str output that you have 16 subjects with 12 observations each (I assume balance b/c you are using classical method-of-moments tools) which should be enough data to fit a mixed model with structured covariance matrices via (restricted) maximum likelihood. Without being close to your data I can't offer specific model recommendations, but a place to start in R would be to replace aov in your model specifications with lme (after library(nlme)). The reason this will work is that you have mistakenly provided an nlme-style random argument to aov (when, as @Matt Albrecht pointed out, an Error term would have been appropriate). In nlme, with the random argument set to ~ 1|<your grouping structure> and no correlation or weight arguments, you are specifying a random intercept for each group, implying the response covariance within groups is $ZGZ' + R = \mathbf{1}G\mathbf{1}' + \sigma^2 I$ ==> compound symmetry with between-group variance off-diagonal and between-group variance + within-group variance on the diagonal ==> a spherical structure. From there you can begin to explore (e.g. using the built-in graphical methods), model, and test (e.g. comparing information criteria or using LRTs for nested models) the various forms of non-sphericity. Some of the tools for the modeling component are: Using the weights argument to model non-constant variance (diagonals) within or between groups (e.g. error variance changes between sucrose levels). Using the correlation argument to model non-constant covariance (off-diagonals) within groups (e.g. a structure within a group where residual errors that are closer together in time (e.g. AR1 structure) or space (e.g. Spherical structure) are more similar). Modeling random slopes by adding terms to the LHS of the | in the random formula. Though the process can be complex with many potential pitfalls, I believe it will lead you to think more about the data generating mechanism, and when combined with careful graphical checks (I recommend lattice b/c nlme has excellent lattice-based plotting methods -- but ggplot works well too) you are likely to have not only a better scientific understanding of the process, but also less biased and more efficient estimators with which to draw inferences.
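For readers who prefer Python, a rough analogue of the random-intercept starting point is sketched below (statsmodels' MixedLM does not offer nlme-style correlation and weights arguments, so this only covers the compound-symmetry baseline; the data frame p12bl and its columns are assumed to be those from the question):

import statsmodels.formula.api as smf

# Long-format data with columns resp, sucrose, citral, subject, exps
df = p12bl[p12bl["exps"] == 1]

# Random intercept per subject => compound-symmetric within-subject covariance
m = smf.mixedlm("resp ~ C(sucrose) * C(citral)", data=df,
                groups=df["subject"]).fit(reml=True)
print(m.summary())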
How to get Sphericity in R for a nested within subject design?
I generally recommend avoiding these types of sphericity tests altogether by using modern mixed modeling methods. If you are not working with few subjects this will give you a great deal of flexibili
How to get Sphericity in R for a nested within subject design? I generally recommend avoiding these types of sphericity tests altogether by using modern mixed modeling methods. If you are not working with few subjects this will give you a great deal of flexibility in modeling an appropriate covariance structure, freeing you from the strict assumption of sphericity when necessary. I infer from the str output that you have 16 subjects with 12 observations each (I assume balance b/c you are using classical method-of-moments tools) which should be enough data to fit a mixed model with structured covariance matrices via (restricted) maximum likelihood. Without being close to your data I can't offer specific model recommendations, but a place to start in R would be to replace aov in your model specifications with lme (after library(nlme)). The reason this will work is that you have mistakenly provided an nlme-style random argument to aov (when, as @Matt Albrecht pointed out, an Error term would have been appropriate). In nlme, with the random argument set to ~ 1|<your grouping structure> and no correlation or weight arguments, you are specifying a random intercept for each group, implying the response covariance within groups is $ZGZ' + R = \mathbf{1}G\mathbf{1}' + \sigma^2 I$ ==> compound symmetry with between-group variance off-diagonal and between-group variance + within-group variance on the diagonal ==> a spherical structure. From there you can begin to explore (e.g. using the built-in graphical methods), model, and test (e.g. comparing information criteria or using LRTs for nested models) the various forms of non-sphericity. Some of the tools for the modeling component are: Using the weights argument to model non-constant variance (diagonals) within or between groups (e.g. error variance changes between sucrose levels). Using the correlation argument to model non-constant covariance (off-diagonals) within groups (e.g. a structure within a group where residual errors that are closer together in time (e.g. AR1 structure) or space (e.g. Spherical structure) are more similar). Modeling random slopes by adding terms to the LHS of the | in the random formula. Though the process can be complex with many potential pitfalls, I believe it will lead you to think more about the data generating mechanism, and when combined with careful graphical checks (I recommend lattice b/c nlme has excellent lattice-based plotting methods -- but ggplot works well too) you are likely to have not only a better scientific understanding of the process, but also less biased and more efficient estimators with which to draw inferences.
How to get Sphericity in R for a nested within subject design? I generally recommend avoiding these types of sphericity tests altogether by using modern mixed modeling methods. If you are not working with few subjects this will give you a great deal of flexibili
33,769
How to get Sphericity in R for a nested within subject design?
ez has now been updated to version 2.0. Among other improvements, the bug that caused it to fail to work for this example has been fixed.
How to get Sphericity in R for a nested within subject design?
ez has now been updated to version 2.0. Among other improvements, the bug that caused it to fail to work for this example has been fixed.
How to get Sphericity in R for a nested within subject design? ez has now been updated to version 2.0. Among other improvements, the bug that caused it to fail to work for this example has been fixed.
How to get Sphericity in R for a nested within subject design? ez has now been updated to version 2.0. Among other improvements, the bug that caused it to fail to work for this example has been fixed.
33,770
Mean of the log and variance of the log
Those apply to a log-normal distribution. The paper says "The evidence is in practice approximately log-normally distributed." If it has parameters $\mu=\mathbb{E}[\log Z]$ and $\sigma^2=\mathrm{Var}[\log Z]$ then: $\mathbb{E}[ Z] = \exp\left(\mu + \frac{\sigma^2}{2}\right)$ $\mathrm{Var}[Z]=(\exp(\sigma^2)-1)\exp(2\mu+\sigma^2)$ $\mathbb{E}[ Z^2] =\exp(2\mu+2\sigma^2)$ which leads to the desired $2\log(\mathbb{E}[Z])-\frac12\log(\mathbb{E}[Z^2]) = 2\mu+\sigma^2 - \mu-\sigma^2=\mu=\mathbb{E}[\log Z]$ $\log(\mathbb{E}[Z^2])-2\log(\mathbb{E}[Z]) = 2\mu +2\sigma^2-2\mu-\sigma^2 = \sigma^2=\mathrm{Var}[\log Z]$
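A short Monte Carlo check of these identities (a Python sketch assuming NumPy):

import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.5, 0.8
z = rng.lognormal(mean=mu, sigma=sigma, size=2_000_000)

EZ, EZ2 = z.mean(), (z**2).mean()
print(2 * np.log(EZ) - 0.5 * np.log(EZ2))   # ~ mu = 0.5
print(np.log(EZ2) - 2 * np.log(EZ))         # ~ sigma^2 = 0.64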
Mean of the log and variance of the log
Those apply to a log-normal distribution. The paper says "The evidence is in practice approximately log-normally distributed." If it has parameters $\mu=\mathbb{E}[\log Z]$ and $\sigma^2=\mathrm{Var}
Mean of the log and variance of the log Those apply to a log-normal distribution. The paper says "The evidence is in practice approximately log-normally distributed." If it has parameters $\mu=\mathbb{E}[\log Z]$ and $\sigma^2=\mathrm{Var}[\log Z]$ then: $\mathbb{E}[ Z] = \exp\left(\mu + \frac{\sigma^2}{2}\right)$ $\mathrm{Var}[Z]=(\exp(\sigma^2)-1)\exp(2\mu+\sigma^2)$ $\mathbb{E}[ Z^2] =\exp(2\mu+2\sigma^2)$ which leads to the desired $2\log(\mathbb{E}[Z])-\frac12\log(\mathbb{E}[Z^2]) = 2\mu+\sigma^2 - \mu-\sigma^2=\mu=\mathbb{E}[\log Z]$ $\log(\mathbb{E}[Z^2])-2\log(\mathbb{E}[Z]) = 2\mu +2\sigma^2-2\mu-\sigma^2 = \sigma^2=\mathrm{Var}[\log Z]$
Mean of the log and variance of the log Those apply to a log-normal distribution. The paper says "The evidence is in practice approximately log-normally distributed." If it has parameters $\mu=\mathbb{E}[\log Z]$ and $\sigma^2=\mathrm{Var}
33,771
Mean of the log and variance of the log
$\newcommand{\e}{\operatorname E} \newcommand{\v}{\operatorname{var}}$The paper actually assumes $X= \log\mathcal Z$ is normally distributed, not just that it is some random variable. Let $\mu=\e(X) = \e(\log\mathcal Z)$ and $\sigma^2 = \v(X) = \v(\log\mathcal Z).$ Then $\mathcal Z=e^X = \exp(X)$ and \begin{align} \e(\mathcal Z) = {} &\int_{-\infty}^{+\infty} (\exp x) \varphi_{\mu,\sigma^2}(x) \,dx \\ & \text{where } \varphi_{\mu,\sigma^2} \text{ is the normal} \\ & \text{density with expectation $\mu$} \\ & \text{and variance $\sigma^2$.} \\[8pt] = {} & \int_{-\infty}^{+\infty} (\exp x) \frac 1 {\sqrt{2\pi}} \exp\left( -\frac12 \left( \frac{x-\mu} \sigma \right)^2 \right) \,\,\frac{dx}{\sigma} \\[8pt] = {} & \int_{-\infty}^{+\infty} (\exp(\mu + \sigma w)) \frac 1 {\sqrt{2\pi}} \exp \left( -\frac 12\,w^2 \right)\, dw \\[8pt] = {} & \int_{-\infty}^{+\infty} \frac 1 {\sqrt{2\pi}} \exp\left( -\frac12 w^2 + \sigma w + \mu \right) \, dw \\[8pt] = {} & \int_{-\infty}^{+\infty} \frac 1 {\sqrt{2\pi}} \exp\left( -\frac12\left( w^2 - 2\sigma w \right) + \mu \right) \, dw \\[8pt] = {} & \int_{-\infty}^{+\infty} \frac 1 {\sqrt{2\pi}} \exp\left( -\frac12\left( w^2 - 2\sigma w + \sigma^2 \right) + \mu + \frac12\sigma^2 \right) \, dw \\ & \text{This is completing the square.} \\[8pt] = {} & \int_{-\infty}^{+\infty} \frac 1 {\sqrt{2\pi}} \exp\left( -\frac12(w-\sigma)^2 \right) \underbrace{ \exp\left( \mu + \frac12\sigma^2 \right) }_\text{No “$w$” appears here!} \, dw \\[8pt] & \text{The absence of $w$ from the expression above} \\ & \text{the $\underbrace{\text{underbrace}}$ means that that can be pulled out:} \\[8pt] = {} & \exp\left( \mu + \frac12\sigma^2 \right) \int_{-\infty}^{+\infty} \frac 1 {\sqrt{2\pi}} \exp\left( -\frac12(w-\sigma)^2 \right) \, dw \\[8pt] = {} & \exp\left( \mu + \frac12\sigma^2 \right) \cdot 1 \end{align} That last integral is equal to $1$ because it is the integral of the normal density with expectation $\sigma$ and variance $1.$ We will also need $\e(\mathcal Z^2).$ Notice that $$ \Big( \exp(\mu+\sigma w) \Big)^2 = \exp(2\mu + 2\sigma w). $$ So the random variable $\mathcal Z^2= \exp(2X)$ has values of $\mu$ and $\sigma$ that are twice as big; hence we have $$ \e(\mathcal Z^2) = \exp\left(2\mu + \frac12(2\sigma)^2 \right). $$ So \begin{align} & \log\e(\mathcal Z) = \mu + \frac12 \sigma^2, \\[8pt] & \log\e(\mathcal Z^2) = 2\mu + \frac12(2\sigma)^2 = 2\mu + 2\sigma^2, \\[8pt] \text{and so } \mu & = \e(\log\mathcal Z) = 2\log\e(\mathcal Z) - \frac12 \log\e(\mathcal Z^2) \\[8pt] \sigma^2 & = \v( \log\mathcal Z) = \log\e(\mathcal Z^2) - 2\log\e(\mathcal Z). \end{align}
Mean of the log and variance of the log
$\newcommand{\e}{\operatorname E} \newcommand{\v}{\operatorname{var}}$The paper actually assumes $X= \log\mathcal Z$ is normally distributed, not just that it is some random variable. Let $\mu=\e(X) =
Mean of the log and variance of the log $\newcommand{\e}{\operatorname E} \newcommand{\v}{\operatorname{var}}$The paper actually assumes $X= \log\mathcal Z$ is normally distributed, not just that it is some random variable. Let $\mu=\e(X) = \e(\log\mathcal Z)$ and $\sigma^2 = \v(X) = \v(\log\mathcal Z).$ Then $\mathcal Z=e^X = \exp(X)$ and \begin{align} \e(\mathcal Z) = {} &\int_{-\infty}^{+\infty} (\exp x) \varphi_{\mu,\sigma^2}(x) \,dx \\ & \text{where } \varphi_{\mu,\sigma^2} \text{ is the normal} \\ & \text{density with expectation $\mu$} \\ & \text{and variance $\sigma^2$.} \\[8pt] = {} & \int_{-\infty}^{+\infty} (\exp x) \frac 1 {\sqrt{2\pi}} \exp\left( -\frac12 \left( \frac{x-\mu} \sigma \right)^2 \right) \,\,\frac{dx}{\sigma} \\[8pt] = {} & \int_{-\infty}^{+\infty} (\exp(\mu + \sigma w)) \frac 1 {\sqrt{2\pi}} \exp \left( -\frac 12\,w^2 \right)\, dw \\[8pt] = {} & \int_{-\infty}^{+\infty} \frac 1 {\sqrt{2\pi}} \exp\left( -\frac12 w^2 + \sigma w + \mu \right) \, dw \\[8pt] = {} & \int_{-\infty}^{+\infty} \frac 1 {\sqrt{2\pi}} \exp\left( -\frac12\left( w^2 - 2\sigma w \right) + \mu \right) \, dw \\[8pt] = {} & \int_{-\infty}^{+\infty} \frac 1 {\sqrt{2\pi}} \exp\left( -\frac12\left( w^2 - 2\sigma w + \sigma^2 \right) + \mu + \frac12\sigma^2 \right) \, dw \\ & \text{This is completing the square.} \\[8pt] = {} & \int_{-\infty}^{+\infty} \frac 1 {\sqrt{2\pi}} \exp\left( -\frac12(w-\sigma)^2 \right) \underbrace{ \exp\left( \mu + \frac12\sigma^2 \right) }_\text{No “$w$” appears here!} \, dw \\[8pt] & \text{The absence of $w$ from the expression above} \\ & \text{the $\underbrace{\text{underbrace}}$ means that that can be pulled out:} \\[8pt] = {} & \exp\left( \mu + \frac12\sigma^2 \right) \int_{-\infty}^{+\infty} \frac 1 {\sqrt{2\pi}} \exp\left( -\frac12(w-\sigma)^2 \right) \, dw \\[8pt] = {} & \exp\left( \mu + \frac12\sigma^2 \right) \cdot 1 \end{align} That last integral is equal to $1$ because it is the integral of the normal density with expectation $\sigma$ and variance $1.$ We will also need $\e(\mathcal Z^2).$ Notice that $$ \Big( \exp(\mu+\sigma w) \Big)^2 = \exp(2\mu + 2\sigma w). $$ So the random variable $\mathcal Z^2= \exp(2X)$ has values of $\mu$ and $\sigma$ that are twice as big; hence we have $$ \e(\mathcal Z^2) = \exp\left(2\mu + \frac12(2\sigma)^2 \right). $$ So \begin{align} & \log\e(\mathcal Z) = \mu + \frac12 \sigma^2, \\[8pt] & \log\e(\mathcal Z^2) = 2\mu + \frac12(2\sigma)^2 = 2\mu + 2\sigma^2, \\[8pt] \text{and so } \mu & = \e(\log\mathcal Z) = 2\log\e(\mathcal Z) - \frac12 \log\e(\mathcal Z^2) \\[8pt] \sigma^2 & = \v( \log\mathcal Z) = \log\e(\mathcal Z^2) - 2\log\e(\mathcal Z). \end{align}
Mean of the log and variance of the log $\newcommand{\e}{\operatorname E} \newcommand{\v}{\operatorname{var}}$The paper actually assumes $X= \log\mathcal Z$ is normally distributed, not just that it is some random variable. Let $\mu=\e(X) =
33,772
Can the χ² test be used without a contingency table?
The $\chi^2$ distribution describes a sum of squares of independent standard normal variables. Although it's usually encountered by students first in the context of contingency tables, it has much wider use. It's the basis of likelihood-ratio tests. The Wald test for coefficients of generalized linear models is based on an asymptotic $\chi^2$ distribution. The F-distribution used in analysis of variance and ordinary linear regression is based on a ratio of $\chi^2$-distributed variables. So a $\chi^2$ value including all coefficients involving a predictor in a model (including nonlinear terms and interactions, or all levels of a categorical predictor) provides a useful summary of the contribution of that predictor to any regression model. If the predictors use up different degrees of freedom, comparison is best done by subtracting the corresponding degrees of freedom (the mean under the null hypothesis) from each $\chi^2$. That said, be very wary of such attempts at automated model selection. Section 5.4 of Frank Harrell's course notes and book illustrates how unstable such variable selection based on $\chi^2$ can be. Illustration of this type of $\chi^2$ for predictor comparison Other answers have shown that the scikit-learn function in question bins the continuous features to generate a contingency table. Here's an example of how you could use the Wald $\chi^2$ to evaluate predictor importance without binning. With the iris data set, do a multinomial regression of Species on the continuous predictors. library(nnet) mnIris <- multinom(Species~ Sepal.Length+Sepal.Width+Petal.Length+Petal.Width, data = iris, Hess=TRUE,maxit=200) With code (shown below) to extract the 2 coefficients for each continuous predictor and the corresponding subset of the coefficient covariance matrix, display the $\chi^2$ values. for(pred in names(iris)[1:4]) WaldChisq(mnIris,pred) # Sepal.Length 1.093174 # Sepal.Width 2.292513 # Petal.Length 3.979784 # Petal.Width 3.525995 These are all based on 2 degrees of freedom so they can be compared directly. Admittedly, this won't scale to large data sets as efficiently as the scikit-learn binning, but it does demonstrate a use of $\chi^2$ statistics for predictor comparison without a contingency table. The function to get single-predictor $\chi^2$ statistics from the multinomial model: WaldChisq <- function(model, predictor) { cat(predictor,"\t"); coefs<-data.frame(coef(model))[,predictor]; vcovSub <- vcov(model)[grepl(predictor,rownames(vcov(model))), grepl(predictor,colnames(vcov(model)))]; cat(as.numeric(coefs %*% solve(vcovSub,coefs)),"\n") }
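The Wald $\chi^2$ itself is just a quadratic form in the coefficient estimates, so the computation is easy to port to other languages. A minimal Python sketch (b and V are assumed to be the coefficient subvector for one predictor and the corresponding block of the coefficient covariance matrix, extracted from whatever model you fitted):

import numpy as np
from scipy import stats

def wald_chisq(b, V):
    """Wald chi-squared statistic b' V^{-1} b with its p-value."""
    b = np.asarray(b, dtype=float)
    chisq = float(b @ np.linalg.solve(V, b))
    return chisq, stats.chi2.sf(chisq, df=len(b))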
Can the χ² test be used without a contingency table?
The $\chi^2$ distribution describes a sum of independent standard normal variables. Although it's usually encountered by students first in the context of contingency tables, it has much wider use. It'
Can the χ² test be used without a contingency table? The $\chi^2$ distribution describes a sum of squares of independent standard normal variables. Although it's usually encountered by students first in the context of contingency tables, it has much wider use. It's the basis of likelihood-ratio tests. The Wald test for coefficients of generalized linear models is based on an asymptotic $\chi^2$ distribution. The F-distribution used in analysis of variance and ordinary linear regression is based on a ratio of $\chi^2$-distributed variables. So a $\chi^2$ value including all coefficients involving a predictor in a model (including nonlinear terms and interactions, or all levels of a categorical predictor) provides a useful summary of the contribution of that predictor to any regression model. If the predictors use up different degrees of freedom, comparison is best done by subtracting the corresponding degrees of freedom (the mean under the null hypothesis) from each $\chi^2$. That said, be very wary of such attempts at automated model selection. Section 5.4 of Frank Harrell's course notes and book illustrates how unstable such variable selection based on $\chi^2$ can be. Illustration of this type of $\chi^2$ for predictor comparison Other answers have shown that the scikit-learn function in question bins the continuous features to generate a contingency table. Here's an example of how you could use the Wald $\chi^2$ to evaluate predictor importance without binning. With the iris data set, do a multinomial regression of Species on the continuous predictors. library(nnet) mnIris <- multinom(Species~ Sepal.Length+Sepal.Width+Petal.Length+Petal.Width, data = iris, Hess=TRUE,maxit=200) With code (shown below) to extract the 2 coefficients for each continuous predictor and the corresponding subset of the coefficient covariance matrix, display the $\chi^2$ values. for(pred in names(iris)[1:4]) WaldChisq(mnIris,pred) # Sepal.Length 1.093174 # Sepal.Width 2.292513 # Petal.Length 3.979784 # Petal.Width 3.525995 These are all based on 2 degrees of freedom so they can be compared directly. Admittedly, this won't scale to large data sets as efficiently as the scikit-learn binning, but it does demonstrate a use of $\chi^2$ statistics for predictor comparison without a contingency table. The function to get single-predictor $\chi^2$ statistics from the multinomial model: WaldChisq <- function(model, predictor) { cat(predictor,"\t"); coefs<-data.frame(coef(model))[,predictor]; vcovSub <- vcov(model)[grepl(predictor,rownames(vcov(model))), grepl(predictor,colnames(vcov(model)))]; cat(as.numeric(coefs %*% solve(vcovSub,coefs)),"\n") }
Can the χ² test be used without a contingency table? The $\chi^2$ distribution describes a sum of independent standard normal variables. Although it's usually encountered by students first in the context of contingency tables, it has much wider use. It'
33,773
Can the χ² test be used without a contingency table?
So there are two things here: 1. can the chi-squared test be used on data that cannot be represented as a contingency table? 2. what is scikit-learn doing? Most commonly, when you search for the chi-squared test, you will find the test on the contingency table; however, the chi-squared test is more general, and it refers to any test that uses the chi-squared distribution. With this, you can test the goodness of fit of many different models, not just the contingency table. The most important test here is the likelihood-ratio test, which is also a chi-squared test, because the test statistic asymptotically follows the chi-squared distribution. This test compares the goodness of fit of two nested models, and most of your standard tests can be expressed as comparing two nested models and are equivalent to some form of likelihood-ratio test. So the chi-squared test can be used to test pretty much anything. It is not uncommon that scikit-learn implements some statistical procedures badly. This is because, as the developers say, it is a machine learning library and not a stats library, and the developers are also not statisticians but machine learners. And they get a bit rude and defensive when someone points out that something doesn't work as a stats person would expect (at least in the past). Anyway. In the docs for chi2 it says This score can be used to select the n_features features with the highest values for the test chi-squared statistic from X, which must contain only non-negative features such as booleans or frequencies (e.g., term counts in document classification), relative to the classes. So it seems like the function is implementing the standard chi-squared test on contingency tables, and in this case, its use in the iris tutorial would be wrong. I assume, but I am not going to check the code for it, that the function just calculates the chi-squared statistics according to the formula that you would use for the contingency table, but using any feature values.
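To make the likelihood-ratio point concrete: the statistic is twice the gap in maximized log-likelihoods between the two nested models, referred to a chi-squared distribution whose degrees of freedom equal the difference in parameter counts. A generic Python sketch (the log-likelihoods are assumed to come from your own fitted models):

from scipy import stats

def lr_test(loglik_reduced, loglik_full, df_diff):
    """Likelihood-ratio test comparing two nested models."""
    lr = 2.0 * (loglik_full - loglik_reduced)   # asymptotically chi-squared
    return lr, stats.chi2.sf(lr, df=df_diff)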
Can the χ² test be used without a contingency table?
So there are two things here, 1. can the chi-squared test be used on data that cannot be represented as a contingency table? 2. what is scikit-learn doing? most commonly, when you search for the chi-
Can the χ² test be used without a contingency table? So there are two things here: 1. can the chi-squared test be used on data that cannot be represented as a contingency table? 2. what is scikit-learn doing? Most commonly, when you search for the chi-squared test, you will find the test on the contingency table; however, the chi-squared test is more general, and it refers to any test that uses the chi-squared distribution. With this, you can test the goodness of fit of many different models, not just the contingency table. The most important test here is the likelihood-ratio test, which is also a chi-squared test, because the test statistic asymptotically follows the chi-squared distribution. This test compares the goodness of fit of two nested models, and most of your standard tests can be expressed as comparing two nested models and are equivalent to some form of likelihood-ratio test. So the chi-squared test can be used to test pretty much anything. It is not uncommon that scikit-learn implements some statistical procedures badly. This is because, as the developers say, it is a machine learning library and not a stats library, and the developers are also not statisticians but machine learners. And they get a bit rude and defensive when someone points out that something doesn't work as a stats person would expect (at least in the past). Anyway. In the docs for chi2 it says This score can be used to select the n_features features with the highest values for the test chi-squared statistic from X, which must contain only non-negative features such as booleans or frequencies (e.g., term counts in document classification), relative to the classes. So it seems like the function is implementing the standard chi-squared test on contingency tables, and in this case, its use in the iris tutorial would be wrong. I assume, but I am not going to check the code for it, that the function just calculates the chi-squared statistics according to the formula that you would use for the contingency table, but using any feature values.
Can the χ² test be used without a contingency table? So there are two things here, 1. can the chi-squared test be used on data that cannot be represented as a contingency table? 2. what is scikit-learn doing? most commonly, when you search for the chi-
33,774
Can the χ² test be used without a contingency table?
There are other chi-squared tests, but the scikit-learn function chi2 performs Pearson's chi-squared test for contingency tables. Pearson's chi-squared test computes expected and observed frequencies and then passes these to a function that computes a chi-squared statistic with the formula $$\chi^2 = \sum_{\forall i} \frac{(O_i - E_i)^2}{E_i}$$ This formula is specific to count data. It should not be used for continuous variables (nor for the frequencies mentioned in the source code comments and in the manual; frequencies are fractions of counts). The reason that you must use counts is that the statistic is based on a specific relationship between the mean and the variance of the counts for data that follows a multinomial distribution. With types of data other than counts, the relationship between the mean and variance can be completely different. def _chisquare(f_obs, f_exp): """Fast replacement for scipy.stats.chisquare. Version from https://github.com/scipy/scipy/pull/2525 with additional optimizations. """ f_obs = np.asarray(f_obs, dtype=np.float64) k = len(f_obs) # Reuse f_obs for chi-squared statistics chisq = f_obs chisq -= f_exp chisq **= 2 with np.errstate(invalid="ignore"): chisq /= f_exp chisq = chisq.sum(axis=0) return chisq, special.chdtrc(k - 1, chisq) def chi2(X, y): """Compute chi-squared stats between each non-negative feature and class. This score can be used to select the n_features features with the highest values for the test chi-squared statistic from X, which must contain only non-negative features such as booleans or frequencies (e.g., term counts in document classification), relative to the classes. Recall that the chi-square test measures dependence between stochastic variables, so using this function "weeds out" the features that are the most likely to be independent of class and therefore irrelevant for classification. Read more in the :ref:`User Guide <univariate_feature_selection>`. Parameters ---------- X : {array-like, sparse matrix}, shape = (n_samples, n_features_in) Sample vectors. y : array-like, shape = (n_samples,) Target vector (class labels). Returns ------- chi2 : array, shape = (n_features,) chi2 statistics of each feature. pval : array, shape = (n_features,) p-values of each feature. Notes ----- Complexity of this algorithm is O(n_classes * n_features). See also -------- f_classif: ANOVA F-value between label/feature for classification tasks. f_regression: F-value between label/feature for regression tasks. """ # XXX: we might want to do some of the following in logspace instead for # numerical stability. X = check_array(X, accept_sparse='csr') if np.any((X.data if issparse(X) else X) < 0): raise ValueError("Input X must be non-negative.") Y = LabelBinarizer().fit_transform(y) if Y.shape[1] == 1: Y = np.append(1 - Y, Y, axis=1) observed = safe_sparse_dot(Y.T, X) # n_classes * n_features feature_count = X.sum(axis=0).reshape(1, -1) class_prob = Y.mean(axis=0).reshape(1, -1) expected = np.dot(class_prob.T, feature_count) return _chisquare(observed, expected)
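For genuine count data arranged as a contingency table, the same Pearson statistic can be cross-checked with SciPy; a small sketch with a made-up table:

import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[10, 20, 30],
                  [15, 15, 30]])   # rows: classes, columns: category levels
chisq, pval, dof, expected = chi2_contingency(table, correction=False)
print(chisq, pval)                 # Pearson's sum((O - E)^2 / E) and its p-value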
Can the χ² test be used without a contingency table?
There are other chi-squared tests, but the numpy function chi2 performs the Pearson's chi-squared test for contingency tables. The Pearson's chi-squared test computes expected and observed frequencies
Can the χ² test be used without a contingency table? There are other chi-squared tests, but the scikit-learn function chi2 performs Pearson's chi-squared test for contingency tables. Pearson's chi-squared test computes expected and observed frequencies and then passes these to a function that computes a chi-squared statistic with the formula $$\chi^2 = \sum_{\forall i} \frac{(O_i - E_i)^2}{E_i}$$ This formula is specific to count data. It should not be used for continuous variables (nor for the frequencies mentioned in the source code comments and in the manual; frequencies are fractions of counts). The reason that you must use counts is that the statistic is based on a specific relationship between the mean and the variance of the counts for data that follows a multinomial distribution. With types of data other than counts, the relationship between the mean and variance can be completely different. def _chisquare(f_obs, f_exp): """Fast replacement for scipy.stats.chisquare. Version from https://github.com/scipy/scipy/pull/2525 with additional optimizations. """ f_obs = np.asarray(f_obs, dtype=np.float64) k = len(f_obs) # Reuse f_obs for chi-squared statistics chisq = f_obs chisq -= f_exp chisq **= 2 with np.errstate(invalid="ignore"): chisq /= f_exp chisq = chisq.sum(axis=0) return chisq, special.chdtrc(k - 1, chisq) def chi2(X, y): """Compute chi-squared stats between each non-negative feature and class. This score can be used to select the n_features features with the highest values for the test chi-squared statistic from X, which must contain only non-negative features such as booleans or frequencies (e.g., term counts in document classification), relative to the classes. Recall that the chi-square test measures dependence between stochastic variables, so using this function "weeds out" the features that are the most likely to be independent of class and therefore irrelevant for classification. Read more in the :ref:`User Guide <univariate_feature_selection>`. Parameters ---------- X : {array-like, sparse matrix}, shape = (n_samples, n_features_in) Sample vectors. y : array-like, shape = (n_samples,) Target vector (class labels). Returns ------- chi2 : array, shape = (n_features,) chi2 statistics of each feature. pval : array, shape = (n_features,) p-values of each feature. Notes ----- Complexity of this algorithm is O(n_classes * n_features). See also -------- f_classif: ANOVA F-value between label/feature for classification tasks. f_regression: F-value between label/feature for regression tasks. """ # XXX: we might want to do some of the following in logspace instead for # numerical stability. X = check_array(X, accept_sparse='csr') if np.any((X.data if issparse(X) else X) < 0): raise ValueError("Input X must be non-negative.") Y = LabelBinarizer().fit_transform(y) if Y.shape[1] == 1: Y = np.append(1 - Y, Y, axis=1) observed = safe_sparse_dot(Y.T, X) # n_classes * n_features feature_count = X.sum(axis=0).reshape(1, -1) class_prob = Y.mean(axis=0).reshape(1, -1) expected = np.dot(class_prob.T, feature_count) return _chisquare(observed, expected)
Can the χ² test be used without a contingency table? There are other chi-squared tests, but the numpy function chi2 performs the Pearson's chi-squared test for contingency tables. The Pearson's chi-squared test computes expected and observed frequencies
33,775
Can the χ² test be used without a contingency table?
This question is in part about programming: How does sklearn.feature_selection use the chi2 criterion to rank features? It's a good question, so not surprisingly it already has a good answer on Stack Overflow: How SelectKBest (chi2) calculates score? So let's consider another interesting question, this one more appropriate for Cross Validated: In what cases is scikit-learn's chi2 criterion useful for feature ranking and selection, if at all? The other answers discuss at length that the Pearson's chi-squared test is a test (for goodness of fit, homogeneity or independence) on contingency tables of counts. So if the features are not counts, then the chi2 criterion is not applicable. A couple of observations that might help to explain the procedure as a heuristic rather than as an appropriate application of statistical theory. (The intended use of SelectKBest?) The ranking of the features won't change if instead of raw counts we use relative frequencies as long as the counts are normalized by the same total. Since SelectKBest chooses a fixed number of features, the result will be the same. So at least on paper the procedure is fine for counts or frequencies (with the same total) though not positive continuous variables in general.٭ The chi-squared statistic is more sensitive to small differences in large counts. Intuitively, if some features have small total count and other features have large total count (e.g. dummy variables vs term counts in document classification), the large-count features are more likely to be selected. The feature totals would act a bit like feature weights. So an appropriate use of the chi2 criterion is when the features are counts and of the same type; for example: all dummy variables or all term frequencies. (If there are different types of features, it may be better to SelectKBest from each type first and FeatureUnion the selected features, not the other way round.) The applicability of the procedure may still be limited though. As @EdM points out, automated feature selection, whatever the selection criterion, has many pitfalls. ٭ SelectKBest is designed to be generic, so it uses the scores, not the p-values, to rank and select the top k features. We can define our own scoring functions as well.
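A minimal sketch of the intended use on count features (synthetic Poisson "term counts"; the data below is made up purely for illustration):

import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(0)
X = rng.poisson(lam=[1.0, 5.0, 5.0], size=(200, 3))   # non-negative counts
y = (X[:, 0] > 1).astype(int)                         # label tied to feature 0

selector = SelectKBest(chi2, k=2).fit(X, y)
print(selector.scores_)        # one chi2 score per feature
print(selector.get_support())  # mask of the k selected features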
Can the χ² test be used without a contingency table?
This question is in part about programming: How does sklearn.feature_selection uses the chi2 criterion to rank features? It's a good question, so not surprisingly it already has a good answer on Stack
Can the χ² test be used without a contingency table? This question is in part about programming: How does sklearn.feature_selection use the chi2 criterion to rank features? It's a good question, so not surprisingly it already has a good answer on Stack Overflow: How SelectKBest (chi2) calculates score? So let's consider another interesting question, this one more appropriate for Cross Validated: In what cases is scikit-learn's chi2 criterion useful for feature ranking and selection, if at all? The other answers discuss at length that the Pearson's chi-squared test is a test (for goodness of fit, homogeneity or independence) on contingency tables of counts. So if the features are not counts, then the chi2 criterion is not applicable. A couple of observations that might help to explain the procedure as a heuristic rather than as an appropriate application of statistical theory. (The intended use of SelectKBest?) The ranking of the features won't change if instead of raw counts we use relative frequencies as long as the counts are normalized by the same total. Since SelectKBest chooses a fixed number of features, the result will be the same. So at least on paper the procedure is fine for counts or frequencies (with the same total) though not positive continuous variables in general.٭ The chi-squared statistic is more sensitive to small differences in large counts. Intuitively, if some features have small total count and other features have large total count (e.g. dummy variables vs term counts in document classification), the large-count features are more likely to be selected. The feature totals would act a bit like feature weights. So an appropriate use of the chi2 criterion is when the features are counts and of the same type; for example: all dummy variables or all term frequencies. (If there are different types of features, it may be better to SelectKBest from each type first and FeatureUnion the selected features, not the other way round.) The applicability of the procedure may still be limited though. As @EdM points out, automated feature selection, whatever the selection criterion, has many pitfalls. ٭ SelectKBest is designed to be generic, so it uses the scores, not the p-values, to rank and select the top k features. We can define our own scoring functions as well.
Can the χ² test be used without a contingency table? This question is in part about programming: How does sklearn.feature_selection uses the chi2 criterion to rank features? It's a good question, so not surprisingly it already has a good answer on Stack
33,776
Target encoding in test data and target leakage
There's an excellent tutorial on this in the "Learn from Top Kagglers: How to Win a Data Science Competition" Coursera course, which is currently unavailable owing to the course's affiliation with Moscow State University. It answers several of these questions and I'm not aware of any other resource that is nearly as good (there are several good YouTube videos, but they gloss over some details such as the target leakage within the training data issue). Another source that I've looked at is the "Approaching (almost) any Machine Learning problem" book by Abhishek Thakur, in particular the brief section from page 132 onwards. There may also be other good materials by other Kagglers, because this is a technique that is widely used in data science competitions, but has received comparatively little academic attention. Additionally, people taking part in serious data science competitions are extremely well incentivized to find approaches that generalize well to previously unseen data (or at least the test set for the competition, which they cannot see) and to avoid any target leakage, including in their own evaluation of their models. It seems that for that reason academic descriptions of the topic tend to gloss over very important details that people regularly participating in data science competitions are aware of. I'm sure there are notable positive exceptions and the two communities do of course overlap substantially, so take these comments as subjective impressions of "average" papers I've seen. Calculating target encoding out-of-fold This is critical for giving you a fair evaluation of your model (including the target encoded features) on the validation part of a fold (i.e. nothing that is used as a predictor when evaluating the model in the validation part of a fold must in any way shape or form use the target information on the validation part of the fold). At this point, we have avoided target leakage to the validation part of each fold. That's important for evaluating our model. However, we still have target leakage between records in the training data. Example let's assume our training data is this: Record Category Label 1 A 0 2 A 0 3 A 0 4 B 0 5 B 0 6 B 1 7 B 1 8 C 1 9 C 1 10 C 1 In this example, a target encoding of A = 0, B = 0.5 and C = 1.0 allows for overfitting, as the target encoding as a feature for record 1 already gives away that record 1 must have a label of 0, otherwise the target encoding would not be 0. Next, you might go for leave-current-record-out target encoding, but even that has issues: for records 4 and 5 (encoding 0.67), and records 6 and 7 (encoding 0.33) you again leak that 4 and 5 must be 0s, and 6 and 7 must be 1s (otherwise their labels wouldn't differ from other records with the same category). Additionally, this example shows a weird reversal of the effect of the target encoding as a predictor (i.e. lower target encoding means higher label value for the current record). In any case, some classes of models will be able to overfit to this target leakage (and this is really just overfitting, because you cannot leak the true label in such a way on a real test set). So, in summary, the remaining problem with "naive target encoding in a cross-validated fashion" does not lead to the CV-evaluation being invalid. However, it may negatively impact our model performance, because it leads to overfitting. There are approaches to reduce its impact (e.g. regularization, see next section) and even to prevent it (e.g. further nested splitting of data within the training part of a fold-split for creating target encodings). Let's illustrate the latter idea: e.g. you split your data 5-fold, then each fold-split can be looked at as consisting of 5 parts (one validation part and 4 training parts); you can for each training part calculate the target encoding from the other 3 training parts and use that as features on this training part. Regularization Regularization here can mean multiple things. One commonly used technique is to take a weighted average with some weight parameter $\lambda \in [0,1]$ so that the target encoding would be $$\lambda \times \text{overall average} + (1-\lambda) \times \text{average for category}.$$ Another version is to pick some $N_\text{pseudo}>0$ (kind of a number of pseudo-observations that pull the category average to the overall average) and to base the regularization on the amount of records in the category $$N_\text{pseudo} / (N_\text{category}+N_\text{pseudo}) \times \text{overall average} \\ + N_\text{category} / (N_\text{category}+N_\text{pseudo}) \times \text{average for category}.$$ This second option has the nice feature of more regularization for smaller categories. You can now tune these parameters like other hyperparameters and pick ones that lead to good out-of-fold performance. This form of regularization does not really prevent target leakage, but can reduce the impact of target leakage within the training data of a fold-split (which is basically overfitting). Implementation on the test set This will - as it always should - be done the same way as you do for the validation part of each cross-validation fold-split. I.e. without using the test (or validation, in case of doing CV) target information, at all. I.e. the test set encoding is based on the target encoding for the training data. That's all the data with labels, if you re-train a model on all your data before applying it to the test data, or the training data of each fold split, if you apply the models from each of your CV folds to the test data and average their results. In the latter case, that means that each of those models would use a different target encoding. Note that you may have to take care of previously unseen categories that you did not have in the training data. The good thing is that this might already occur in your cross-validation, in which case you can evaluate your approach for dealing with these via the cross-validation already. E.g. do you just use the overall average (and possibly also use a frequency encoding, or some other way of flagging that it's a rare category), or do you pool rare and/or not previously seen categories into a larger "other" category, or something else.
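To make the out-of-fold computation with pseudo-count regularization concrete, here is a minimal pandas/scikit-learn sketch (the column names Category and Label follow the toy example above; everything else is illustrative):

import numpy as np
import pandas as pd
from sklearn.model_selection import KFold

def oof_target_encode(df, cat_col, target_col, n_splits=5, n_pseudo=10.0):
    """Out-of-fold target encoding with shrinkage toward the overall average."""
    enc = pd.Series(np.nan, index=df.index)
    for tr_idx, va_idx in KFold(n_splits, shuffle=True, random_state=0).split(df):
        tr = df.iloc[tr_idx]
        overall = tr[target_col].mean()
        grp = tr.groupby(cat_col)[target_col].agg(["mean", "count"])
        smoothed = ((grp["count"] * grp["mean"] + n_pseudo * overall)
                    / (grp["count"] + n_pseudo))
        # Validation rows get encodings computed on the training part only;
        # unseen categories fall back to the overall training average
        enc.iloc[va_idx] = (df.iloc[va_idx][cat_col].map(smoothed)
                            .fillna(overall).to_numpy())
    return enc

# enc = oof_target_encode(train_df, "Category", "Label")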
Target encoding in test data and target leakage
There's an excellent tutorial on this in the "Learn from Top Kagglers: How to Win a Data Science Competition" Coursera course, which is currently unavailable in response to the affiliation of the cour
Target encoding in test data and target leakage There's an excellent tutorial on this in the "Learn from Top Kagglers: How to Win a Data Science Competition" Coursera course, which is currently unavailable owing to the course's affiliation with Moscow State University. It answers several of these questions and I'm not aware of any other resource that is nearly as good (there are several good YouTube videos, but they gloss over some details such as the target leakage within the training data issue). Another source that I've looked at is the "Approaching (almost) any Machine Learning problem" book by Abhishek Thakur, in particular the brief section from page 132 onwards. There may also be other good materials by other Kagglers, because this is a technique that is widely used in data science competitions, but has received comparatively little academic attention. Additionally, people taking part in serious data science competitions are extremely well incentivized to find approaches that generalize well to previously unseen data (or at least the test set for the competition, which they cannot see) and to avoid any target leakage, including in their own evaluation of their models. It seems that for that reason academic descriptions of the topic tend to gloss over very important details that people regularly participating in data science competitions are aware of. I'm sure there are notable positive exceptions and the two communities do of course overlap substantially, so take these comments as subjective impressions of "average" papers I've seen. Calculating target encoding out-of-fold This is critical for giving you a fair evaluation of your model (including the target encoded features) on the validation part of a fold (i.e. nothing that is used as a predictor when evaluating the model in the validation part of a fold must in any way shape or form use the target information on the validation part of the fold). At this point, we have avoided target leakage to the validation part of each fold. That's important for evaluating our model. However, we still have target leakage between records in the training data. Example let's assume our training data is this: Record Category Label 1 A 0 2 A 0 3 A 0 4 B 0 5 B 0 6 B 1 7 B 1 8 C 1 9 C 1 10 C 1 In this example, a target encoding of A = 0, B = 0.5 and C = 1.0 allows for overfitting, as the target encoding as a feature for record 1 already gives away that record 1 must have a label of 0, otherwise the target encoding would not be 0. Next, you might go for leave-current-record-out target encoding, but even that has issues: for records 4 and 5 (encoding 0.67), and records 6 and 7 (encoding 0.33) you again leak that 4 and 5 must be 0s, and 6 and 7 must be 1s (otherwise their labels wouldn't differ from other records with the same category). Additionally, this example shows a weird reversal of the effect of the target encoding as a predictor (i.e. lower target encoding means higher label value for the current record). In any case, some classes of models will be able to overfit to this target leakage (and this is really just overfitting, because you cannot leak the true label in such a way on a real test set). So, in summary, the remaining problem with "naive target encoding in a cross-validated fashion" does not lead to the CV-evaluation being invalid. However, it may negatively impact our model performance, because it leads to overfitting. There are approaches to reduce its impact (e.g. regularization, see next section) and even to prevent it (e.g. further nested splitting of data within the training part of a fold-split for creating target encodings). Let's illustrate the latter idea: e.g. you split your data 5-fold, then each fold-split can be looked at as consisting of 5 parts (one validation part and 4 training parts); you can for each training part calculate the target encoding from the other 3 training parts and use that as features on this training part. Regularization Regularization here can mean multiple things. One commonly used technique is to take a weighted average with some weight parameter $\lambda \in [0,1]$ so that the target encoding would be $$\lambda \times \text{overall average} + (1-\lambda) \times \text{average for category}.$$ Another version is to pick some $N_\text{pseudo}>0$ (kind of a number of pseudo-observations that pull the category average to the overall average) and to base the regularization on the amount of records in the category $$N_\text{pseudo} / (N_\text{category}+N_\text{pseudo}) \times \text{overall average} \\ + N_\text{category} / (N_\text{category}+N_\text{pseudo}) \times \text{average for category}.$$ This second option has the nice feature of more regularization for smaller categories. You can now tune these parameters like other hyperparameters and pick ones that lead to good out-of-fold performance. This form of regularization does not really prevent target leakage, but can reduce the impact of target leakage within the training data of a fold-split (which is basically overfitting). Implementation on the test set This will - as it always should - be done the same way as you do for the validation part of each cross-validation fold-split. I.e. without using the test (or validation, in case of doing CV) target information, at all. I.e. the test set encoding is based on the target encoding for the training data. That's all the data with labels, if you re-train a model on all your data before applying it to the test data, or the training data of each fold split, if you apply the models from each of your CV folds to the test data and average their results. In the latter case, that means that each of those models would use a different target encoding. Note that you may have to take care of previously unseen categories that you did not have in the training data. The good thing is that this might already occur in your cross-validation, in which case you can evaluate your approach for dealing with these via the cross-validation already. E.g. do you just use the overall average (and possibly also use a frequency encoding, or some other way of flagging that it's a rare category), or do you pool rare and/or not previously seen categories into a larger "other" category, or something else.
Target encoding in test data and target leakage There's an excellent tutorial on this in the "Learn from Top Kagglers: How to Win a Data Science Competition" Coursera course, which is currently unavailable in response to the affiliation of the cour
33,777
Target encoding in test data and target leakage
I think there's some confusion about what constitutes target leakage here. There are two places where it can come up:

1. The more serious kind, and the kind I think is generally meant by "target leakage": you incorporate some test-set information into your model building, and so your test scores are biased.
2. Incorporating training target information into your encoding process, which can give your eventual model too much information and lead to overfitting. But if you consider the encoding steps as part of the larger modeling pipeline, this isn't really "wrong", it's just another flavor of overfitting. You can reduce this effect by various kinds of regularization, see e.g. https://arxiv.org/abs/2104.00629.

I don't understand why your source does nested cross-validation; the outer split seems sufficient to ensure that the target values for a row are taken from an average over other samples, reducing the overfitting effect described in point 2 above. And averaging the means across the inner folds seems likely to be very close to the simple average without the inner folds, assuming the distribution of categories is roughly preserved. There are other ways to accomplish this too, like smoothing with the prior, which is maybe what you meant in your second question?

To your third question, usually you just use the average for the entire training set. You wouldn't want to use the targets from the test set even if they were available, because that would fall into point 1 above. The entire modeling procedure assumes that the test data is iid with the training data, so your last sentence of question 3 is no different from any other aspect of the modeling process: you use the training data to make predictions about the test data.
33,778
How to estimate $P(x\le0)$ from $n$ samples of $x$?
The method you are using is very close to the MLE, which has reasonable estimation properties when the underlying parametric model is correct. The MLE has a property called functional invariance, which means that the MLE of a function of the parameters is that function of the MLE. Your method uses the sample variance estimator, which is a bias-corrected version of the MLE of the true variance, but your estimator should have reasonable properties if the underlying model is correct. Of course, you are correct that your estimator involves some variance, but that is true of any estimator in this situation. If you are confident that your data is from an exchangeable sequence (i.e., it is an IID model) then I would recommend you give serious consideration to instead using the empirical estimator, which is: $$\widehat{\mathbb{P}(X \leqslant 0)} = \frac{1}{n} \sum_{i=1}^n \mathbb{I}(x_i \leqslant 0).$$ This latter estimator also has good properties, but crucially, it does not rely on the assumption that the data are from a normal distribution. The empirical estimator is consistent for any underlying distribution (which your estimator is not) which makes it highly robust to model misspecification.
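A minimal sketch of the two estimators side by side (the sample here is simulated purely for illustration):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)
    x = rng.normal(1, 10, size=50)                            # illustrative sample

    plug_in = norm.cdf(0, loc=x.mean(), scale=x.std(ddof=1))  # assumes normality
    empirical = np.mean(x <= 0)                               # distribution-free
    print(plug_in, empirical)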
33,779
How to estimate $P(x\le0)$ from $n$ samples of $x$?
You can do better, in the sense that among all the unbiased estimators, you can find the one with the smallest variance.

Our goal is to estimate $\mathbb{P}(X<0)$. An unbiased estimator is $$\mathbb{1}(X_1<0)$$ because $\mathbb{E}[\mathbb{1}(X_1<0)]=\mathbb{P}(X_1<0)=\mathbb{P}(X<0)$. Since this is unbiased, the Lehmann–Scheffé theorem tells us that $$\mathbb{P}(X_1<0|\bar{X},S)=\mathbb{P}(X_1<0|\hat{\mu},\hat{\sigma}^2)$$ is the uniformly minimum-variance unbiased estimator (UMVUE), because $(\bar{X},S)$ is complete and sufficient.

A few more steps. The above equals $$\mathbb{P}\left(\frac{X_1-\bar{X}}{S}<\frac{-\bar{X}}{S}\,\middle|\,\bar{X},S\right).$$ By Basu's theorem, $\frac{X_1-\bar{X}}{S}$ is independent of $(\bar{X}, S)$, because $\frac{X_1-\bar{X}}{S}$ is ancillary and $(\bar{X}, S)$ is a complete sufficient statistic. This means the above is nothing but $$\mathbb{P}\left(\frac{X_1-\bar{X}}{S}<\frac{-\bar{X}}{S}\right).$$ Now all we are left to do is find the distribution of the random variable $T=\frac{X_1-\bar{X}}{S}$, which is given precisely in Theory of Point Estimation (2nd ed.) by Lehmann & Casella, Example 2.2.2.

A comment on your estimator: it is good enough in the sense that it is consistent, namely, when you have a large sample size it will be very close to the true value. This follows from the consistency of the plug-in method, or it can be shown via the continuous mapping theorem. However, it is a biased estimator.
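Since $T=(X_1-\bar X)/S$ is ancillary, its distribution is free of $(\mu,\sigma)$, so the UMVUE can be approximated by Monte Carlo without knowing the closed form. A hedged sketch (the data here is simulated for illustration; for the exact distribution see the Lehmann & Casella example cited above):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.normal(1, 10, size=50)        # illustrative data
    n = len(x)
    threshold = -x.mean() / x.std(ddof=1)

    # T's distribution is parameter-free, so simulate it under N(0, 1)
    sims = rng.standard_normal((200_000, n))
    T = (sims[:, 0] - sims.mean(axis=1)) / sims.std(axis=1, ddof=1)
    umvue = np.mean(T < threshold)
    print(umvue)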
33,780
How to estimate $P(x\le0)$ from $n$ samples of $x$?
Suppose $X \sim \mathsf{Norm}(\mu=1,\sigma=10).$ Then $P(X < 0\,|\,\mu=1, \sigma=10) = 0.4602.$

    pnorm(0, 1, 10)
    [1] 0.4601722

Now suppose you have a random sample x of size $n = 50$ from $\mathsf{Norm}(\mu=1,\sigma=10).$

    set.seed(225)
    x = rnorm(50, 1, 10)
    mean(x);  sd(x)
    [1] 2.747168
    [1] 10.84025

Then the usual estimates are $\hat\mu=\bar X =2.7472$ and $\hat\sigma = S = 10.84025.$ Accordingly, you suggest $\hat P(X < 0\,|\,\hat\mu,\hat\sigma) = 0.400.$

    pnorm(0, mean(x), sd(x))
    [1] 0.3999707

There are various ways of assessing the variability of this estimate of the probability. One possibility is to give a 95% parametric bootstrap confidence interval for the probability. There are many styles of bootstrap, and bootstrapping is not the only possibility. To get the discussion started, the simple quantile bootstrap shown below gives the interval $(0.286, 0.511),$ centered near our point estimate $0.40.$ Because this is a simulation procedure (starting with known $\mu$ and $\sigma$ to generate the data for estimation), we know that the true probability is $0.46,$ and thus that this CI contains the true probability. However, in an actual application we would not know whether such a bootstrap CI covers (contains) the true probability; we can hope that, for 95% of samples of size $n=50,$ it does.

    set.seed(2021)
    a = mean(x);  s = sd(x)        # sample 'x' from above
    B = 3000;  p.re = numeric(B)
    for(i in 1:B) {
      x.re = rnorm(50, a, s)       # parametric resample
      p.re[i] = pnorm(0, mean(x.re), sd(x.re))
    }
    quantile(p.re, c(.025,.975))
         2.5%     97.5%
    0.2861269 0.5109705
    length(unique(p.re))
    [1] 3000

Here is a histogram of the 3000 uniquely different re-sampled probability estimates used in making the bootstrap (figure not reproduced here). Sometimes correction for skewness in bootstrap CIs is warranted, but our bootstrap distribution seems roughly symmetrical.

    hist(p.re, prob=T, br=20, col="skyblue2")
    abline(v = c(0.286, 0.511), col="red", lwd=2, lty="dotted")

I will be interested to see other ideas on how to approach this problem.
33,781
How to estimate $P(x\le0)$ from $n$ samples of $x$?
The variance of your estimates doesn't matter for consistency. The mean and variance estimates are consistent, so the functional of them is consistent as well. We use the degrees-of-freedom correction for the sample variance because the uncorrected estimate is ever so slightly biased, but that bias disappears in moderate samples. Note: the biased estimator (without the degrees-of-freedom correction) is the maximum likelihood estimator. MLEs, and functions of them, are consistent estimators of the parameters and of functions of the parameters respectively, as a result of the continuous mapping theorem. When the normal assumption holds, your proposed estimator is better than the non-parametric estimate (the proportion of the sample that is negative); it likely has lower variance too. To find the variance, you can use the law of total variance, or the delta method.
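A sketch of what the delta method gives here, assuming $\hat\mu=\bar X$ and $\hat\sigma=S$ are independent under normality, with $\operatorname{Var}(\bar X)=\sigma^2/n$ and the large-sample approximation $\operatorname{Var}(S)\approx\sigma^2/(2n)$; writing $z=\mu/\sigma$ and $\varphi$ for the standard normal density:

$$\operatorname{Var}\big(\Phi(-\hat\mu/\hat\sigma)\big) \approx \left(\frac{\varphi(z)}{\sigma}\right)^{2}\frac{\sigma^2}{n} + \left(\frac{\mu\,\varphi(z)}{\sigma^2}\right)^{2}\frac{\sigma^2}{2n} = \frac{\varphi(z)^2}{n}\left(1+\frac{z^2}{2}\right).$$

In practice one would plug in $\hat\mu$ and $\hat\sigma$ for $\mu$ and $\sigma$.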
33,782
Why highly correlated means higher variance?
Say you have a normal six-faced die, and you are interested in the mean of the two numbers you get after rolling it twice.

Scenario 1: You roll the die twice and you get {5} and {3}. Their total is 8 and their mean is 4, while we know the expected value is 3.5. We roll again and get {2} and {5}; their mean is 3.5. We got quite close to the true expected value.

Scenario 2: You roll the die once, and then you roll it again until you get a number that is at most $\pm$1 away from your first roll. I roll a {6}, hence I can only accept a {5} or a {6}, so the mean will be 5.5 or 6. After getting one of those I start a new pair and roll a {3}; the second roll is a {2}, so their mean is 2.5.

In Scenario 1 the two rolls of the die are independent and uncorrelated, hence they can freely explore the sample space. In Scenario 2 the two values are highly correlated and the sample space is constrained for the second roll, hence extreme sample means (like 1.5 or 5.5) occur more often. We also note that in Scenario 1 there are many ways to get a sample mean that equals the true mean: {1} and {6}, {5} and {2}, {4} and {3}. In Scenario 2 only {3} and {4} will give you the true population mean; as such, the sample means are more variable in the latter case.

Edit for negative covariance: Consider now a Scenario 3, which is similar to Scenario 2 in that the second roll is also constrained, but this time the rule for the second roll is a little trickier: if our first roll is below 3.5 (the expected value), we only accept second rolls that are at least $+$3 away from the first value, and if it is above 3.5, we only accept second rolls that are at least $-$3 away from the first value. We roll once and get a {4}; the only value we can then accept is a {1}, giving us a sample mean of 2.5. We roll again and get a {2}, leaving only {5} and {6} as possible values for the second roll; the sample mean will be 3.5 or 4.

The sample space is constrained in both Scenario 2 and Scenario 3, but while the former constrains it so that extreme sample means (like {1} and {2}) become more likely, the latter constrains it so that extreme sample means become less likely: it is no longer possible to get {1} and {2}, nor {1} and {3}. As such, the possible sample means are less variable and closer to the true expected value. This is the effect of a strong negative covariance, so the sign matters when interpreting the original statement.
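A quick simulation of Scenarios 1 and 2 (a hedged sketch; the rejection-sampling loop is one way to implement the "within $\pm$1" rule):

    import numpy as np

    rng = np.random.default_rng(42)
    n = 200_000
    first = rng.integers(1, 7, n)

    indep = rng.integers(1, 7, n)          # Scenario 1: independent second roll

    second = rng.integers(1, 7, n)         # Scenario 2: re-roll until within +-1 of the first
    bad = np.abs(second - first) > 1
    while bad.any():
        second[bad] = rng.integers(1, 7, bad.sum())
        bad = np.abs(second - first) > 1

    print(np.var((first + indep) / 2))     # about 1.46: independent rolls
    print(np.var((first + second) / 2))    # noticeably larger: positively correlated rolls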
33,783
Why highly correlated means higher variance?
The image (a pair of scatterplots of correlated data; not reproduced here) might give an intuitive view. It also shows that high correlation does not always mean higher variance, or is at least ambiguous: the plot on the left has a high negative correlation, and the result is a low variance for the sum $x+y$.
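A small numerical sketch of the same intuition, in place of the image (bivariate normal draws; the correlation values are illustrative):

    import numpy as np

    rng = np.random.default_rng(0)
    for rho in (-0.9, 0.0, 0.9):
        cov = [[1.0, rho], [rho, 1.0]]
        x, y = rng.multivariate_normal([0, 0], cov, size=100_000).T
        print(rho, np.var(x + y))   # about 0.2, 2.0, 3.8: Var(x+y) = 2 + 2*rho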
33,784
Why highly correlated means higher variance?
An extreme example to complement the other answer: making $N$ exact copies of one sample gives me $N$ completely correlated samples. Clearly, this does not reduce the variance of any estimates made using the samples. We can show this with your formula by making two copies: $$ \text{Var}(\bar x) = \text{Var}\left(\frac{x+x}{2}\right) = \frac14 \left[\text{Var}(x) + \text{Var}(x) + 2\,\text{Cov}(x,x)\right] = \text{Var}(x). $$ The result can be seen by noting that $(x+x)/2 = x$, or by recognising that $\text{Cov}(x,x) = \text{Var}(x)$.
33,785
Why highly correlated means higher variance?
Because "highly correlated" generally means Cov(X,Y) is +ve and "uncorrelated" means Cov(X,Y) is zero, so comparing "highly correlated" with "uncorrelated" using your expression you would have Var(𝑋+𝑌) highest in the "highly correlated" case (Var(X) and Var(Y) are always positive.
33,786
What did Silverman (1981) mean by 'critical bandwidth'?
I hate animations in Web pages, but this question begs for an animated answer. (The animation is not reproduced here: it shows KDEs for a set of three values, near -2.5, 0.5, and 2.5, whose common bandwidth continually grows from small to large; watch as three peaks become two and ultimately one.)

A KDE puts a pile of "probability" at each data point. As the bandwidth widens, the pile "slumps." When you start with tiny bandwidths, each data value contributes its own discrete pile. As the bandwidths grow, the piles slump, merge, and accumulate on top of each other, ultimately becoming one single pile. Along the way, the number of maxima changes discontinuously from the starting value of $n$ (assuming the kernel has a single maximum, which is almost always the case) down to $1.$

The critical width for $k$ maxima is the first (smallest) width that reduces the KDE to a curve with no more than $k$ maxima.
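In place of the animation, a hedged sketch that counts the KDE maxima as the bandwidth grows (a Gaussian kernel implemented directly; the bandwidth values are illustrative):

    import numpy as np

    data = np.array([-2.5, 0.5, 2.5])
    xs = np.linspace(-8.0, 8.0, 2000)

    def kde(xs, data, h):
        # Gaussian KDE: average of normal densities centred at the data points
        z = (xs[:, None] - data[None, :]) / h
        return np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

    for h in (0.3, 0.8, 1.2, 2.0):
        f = kde(xs, data, h)
        n_peaks = int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))
        print(h, n_peaks)              # the peak count drops as h grows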
33,787
What did Silverman (1981) mean by 'critical bandwidth'?
If you have a really wide bandwidth, you'll get one peak in your KDE. If you reduce it a bit, it's still one peak. Keep reducing it until you reach the switchover point where a second peak appears: that bandwidth is $h(1)$. Now make it smaller still, until you reach the switchover between two peaks and three. That's $h(2)$. And so forth. At any bandwidth between $h(i-1)$ and $h(i)$ you will have $i$ peaks in your KDE. Silverman wanted a name for that set of $h$-values; he called them critical bandwidths. This comes up in his test for multimodality, for example.
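A sketch of how one might compute a critical bandwidth numerically, assuming a Gaussian kernel (for which the mode count is non-increasing in $h$, per Silverman 1981, so bisection applies). The grid resolution and the starting bracket are illustrative choices; the lower bracket must be small enough that more than $k$ modes are present and the grid fine enough to resolve them:

    import numpy as np

    def n_modes(data, h, n_grid=4000):
        # count local maxima of a Gaussian KDE on a fine grid
        grid = np.linspace(data.min() - 3 * h, data.max() + 3 * h, n_grid)
        z = (grid[:, None] - data[None, :]) / h
        f = np.exp(-0.5 * z**2).mean(axis=1) / (h * np.sqrt(2 * np.pi))
        return int(np.sum((f[1:-1] > f[:-2]) & (f[1:-1] > f[2:])))

    def critical_bandwidth(data, k, lo=0.05, hi=None, iters=40):
        # smallest h whose KDE has at most k modes; relies on the mode count
        # being non-increasing in h for the Gaussian kernel
        if hi is None:
            hi = data.max() - data.min()   # wide enough for a single mode here
        for _ in range(iters):
            mid = 0.5 * (lo + hi)
            if n_modes(data, mid) <= k:
                hi = mid
            else:
                lo = mid
        return hi

    data = np.array([-2.5, 0.5, 2.5])
    print(critical_bandwidth(data, 1), critical_bandwidth(data, 2))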
33,788
Why does this set of data have no covariance?
The magnitude of the covariance depends on the magnitude of the data and on how closely the data points scatter around their means. It's easy to see when you look at the formula:

$cov_{x,y}= \frac{\sum(x_i-\bar{x})(y_i-\bar{y})}{n-1}$

In your case, the deviations of the x1 and x2 data points from the means of x1 and x2 are:

    x1 - mean(x1)
     [1]  0.006043341 -0.012907669  0.003978501 -0.003950639 -0.006020309 -0.000423439  0.003873601
     [8] -0.002634199  0.000193071  0.010008621 -0.004096619  0.000683561 -0.007403759  0.006433301
    [15]  0.007553331 -0.002496069  0.008307881 -0.004780219  0.005430541 -0.007792829

    x2 - mean(x2)
     [1]  0.0039622385 -0.0093155415  0.0031978185 -0.0018427215 -0.0040443215 -0.0000098315
     [7]  0.0030910485 -0.0013344815  0.0001757985  0.0054476185 -0.0023106915  0.0002776485
    [13] -0.0052140815  0.0041823185  0.0047488885 -0.0011648015  0.0049787285 -0.0032981115
    [19]  0.0038447385 -0.0053722615

Now if you multiply those two vectors element-wise you obviously get quite small numbers:

    (x1 - mean(x1)) * (x2 - mean(x2))
     [1] 2.394516e-05 1.202419e-04 1.272252e-05 7.279927e-06 2.434807e-05 4.163041e-09 1.197349e-05
     [8] 3.515290e-06 3.394159e-08 5.452315e-05 9.466023e-06 1.897897e-07 3.860380e-05 2.690611e-05
    [15] 3.586993e-05 2.907425e-06 4.136268e-05 1.576570e-05 2.087901e-05 4.186512e-05

Now take the sum and divide by $n-1$ and you have the covariance:

    sum((x1 - mean(x1)) * (x2 - mean(x2))) / (length(x1) - 1)
    [1] 2.591596e-05

That's the reason why the magnitude of the covariance doesn't say much about how strongly x1 and x2 co-vary. By standardizing (or normalizing) the covariance, that is, dividing it by the product of the standard deviations of x1 and x2 (a product which here is very similar in magnitude to the covariance, i.e. 2.609127e-05),

$r=\frac{cov_{x,y}}{s_x s_y} = \frac{\sum(x_i-\bar{x})(y_i-\bar{y})}{(n-1) s_x s_y}$

you get the high correlation coefficient of $r=0.99$, which confirms what you can see in your plot.
33,789
Why does this set of data have no covariance?
Let's talk about what can be seen from a quick glance at the plot, plus some reasonableness checks (these are the sort of things one can do as a matter of course when looking at data, simply being armed with a few basic facts).

First, note that the $n$-denominator version of the standard deviation can't exceed half the range (the $n-1$ denominator version can, but with more than a few observations not by much). The ranges of both variables are on the order of 0.02 (roughly), so the standard deviations should be no more than about half that, and the variances no more than that squared, i.e. about $10^{-4}$. Consequently, the observed values of the variances in your output make sense; they are both less than that, but more than a tenth of it.

The absolute value of the covariance must then be no more than the geometric mean of the two variances (otherwise the correlation could exceed 1). So the absolute value of the covariance should not exceed $\frac14$ of the product of the ranges. If the ranges of both variables were close to $0.02$, we couldn't expect the absolute covariance to exceed $(0.02)^2/4=10^{-4}$. From that very rough analysis, nothing looks surprising.

A more precise analysis comes from actually doing the calculations using more accurate ranges and then thinking about the shapes of the marginal distributions: the ranges are just under $0.023$ and $0.015$ respectively, so the covariance should not exceed $8.6\times 10^{-5}$; but since the marginal distributions are not nearly-symmetric two-point distributions, it must be quite a bit less than that. Indeed, if we say they're not far from uniform, the covariance would be bounded by something nearer 1/12 of the product rather than 1/4 -- i.e. for roughly uniform variates with those ranges it would be less than about $2.9\times 10^{-5}$ -- but not a lot less, because the correlation is high. [These variates aren't uniform - they're left skew - but it's close enough for our present purposes.]

So just from the range of each variable and a rough sense of the marginal distributions and correlation in the plot, I'd expect the covariance to be a bit less than $2.9\times 10^{-5}$. It is actually about $2.6\times 10^{-5}$. (Not so bad for a quick back-of-the-envelope calculation starting with ranges to two significant figures!)
33,790
Understand Link Function in Generalized Linear Model
So when you have binary response data, you have a "yes/no" or "1/0" outcome for each observation. However, what you are trying to estimate when fitting a binary response regression is not a 1/0 outcome for each set of values of the independent variables, but the probability that an individual with those characteristics will produce a "yes" outcome. The response is then not discrete anymore; it's continuous (in the $(0,1)$ interval). The responses in the data (the true $y_i$) are indeed binary, but the estimated responses (the $\Lambda(x_i'b)$ or $\Phi(x_i'b)$) are probabilities.

The underlying meaning of these link functions is that they correspond to the distribution we impose on the error term in the latent variable model. Imagine each individual has an underlying (unobservable) willingness to say "yes" (or be a 1) in the outcome. We model this willingness $y_i^*$ using a linear regression on the individual's characteristics $x_i$ (a vector in multiple regression): $$y_i^*=x_i'\beta + \epsilon_i.$$ This is what is called a latent variable regression. If the individual's willingness is positive ($y_i^*>0$), the observed outcome is a "yes" ($y_i=1$), otherwise a "no". Note that the choice of threshold doesn't matter, as the latent variable model has an intercept.

In linear regression we assume the error term to be normally distributed. In binary response and other models, we need to impose/assume a distribution on the error terms. The inverse of the link function is the cumulative distribution function that the error terms follow. For instance, if it is logistic (and we will use the fact that the logistic distribution is symmetric in the fourth equality), $$P(y_i=1)=P(y_i^*>0)=P(x_i'\beta + \epsilon_i>0)=P(\epsilon_i>-x_i'\beta)=P(\epsilon_i<x_i'\beta)=\Lambda(x_i'\beta).$$ If you assumed the errors to be normally distributed, you would have a probit link, with $\Phi(\cdot)$ instead of $\Lambda(\cdot)$.
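A hedged simulation sketch of the latent variable story (assuming statsmodels is available; the coefficient values are illustrative): draw logistic errors, observe only the sign of $y^*$, and a logit fit recovers $\beta$:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 10_000
    x = rng.normal(size=n)
    eps = rng.logistic(size=n)             # logistic errors -> logit link
    y_star = -0.5 + 2.0 * x + eps          # latent "willingness" (never observed)
    y = (y_star > 0).astype(int)           # we only observe the sign

    fit = sm.Logit(y, sm.add_constant(x)).fit(disp=0)
    print(fit.params)                      # approximately (-0.5, 2.0)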
33,791
Understand Link Function in Generalized Linear Model
A generalized linear model is defined in terms of a linear predictor $$ \eta = X\beta. $$ The next ingredients are a probability distribution that describes the conditional distribution of $Y$, and a link function $g$ that "provides the relationship between the linear predictor and the mean of the distribution function", since we are not predicting the values of $Y$ but rather the conditional mean of $Y$ given the predictors $X$, i.e. $$ E(Y|X) = g^{-1}(\eta). $$

In the case of a Gaussian-family GLM (linear regression), the identity function is used as the link function, so $E(Y|X) = \eta$, while in the case of logistic regression the logit function is used. The (inverse of the) logit function transforms values of $\eta$ in $(-\infty, \infty)$ to $(0, 1)$, since logistic regression predicts probabilities of success, i.e. the mean of a Bernoulli distribution. Other functions are used for transforming linear predictors to the means of different distributions, for example the log function for Poisson regression, or the inverse link for gamma regression.

So the link function does not link the values of $Y$ (e.g. binary values, in the case of logistic regression) to the linear predictor, but rather the mean of the distribution of $Y$ to $\eta$ (to translate the predicted probabilities into $0$'s and $1$'s, you would additionally need a decision rule). The take-away message is that we are not predicting the values of $Y$, but instead describing $Y$ in terms of a probabilistic model and estimating the parameters of the conditional distribution of $Y$ given $X$.

For learning more about link functions and GLMs you can check the Difference between 'link function' and 'canonical link function' for GLM, Purpose of the link function in generalized linear model, and Difference between logit and probit models threads, the very good Wikipedia article on GLMs, and the Generalized Linear Models book by McCullagh and Nelder.
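A minimal numeric sketch of the point that only the inverse link changes across families (the values of the linear predictor are illustrative):

    import numpy as np

    eta = np.array([-2.0, 0.0, 2.0])        # linear predictor X @ beta
    mu_gaussian = eta                       # identity link: E(Y|X) = eta
    mu_binomial = 1 / (1 + np.exp(-eta))    # logit link: inverse is the logistic CDF
    mu_poisson  = np.exp(eta)               # log link: inverse is exp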
33,792
Linear regression with log transformed data - large error [duplicate]
If you say your model is ln(y) = b*ln(x) + a, that is only part of your model. Your actual model includes an error term: $\ln y_i = b\cdot \ln x_i + a + \varepsilon_i$, and you assume the error distribution is $\varepsilon_i \sim \mathcal{N}(0,\,\sigma^2)$.

Now let's back-transform it: $y_i = \exp(a) \cdot x_i^b \cdot \exp(\varepsilon_i)$

As you see, you have a multiplicative error term, i.e., a relative error with constant variation. As a result, you allow more deviation from the fitted line at higher fitted values, i.e., you place less weight on them. This is actually often justified, but of course it gives you larger residuals for higher values, as you have observed. If you are not happy with this, you should not do a transformation followed by OLS. One alternative would be a Generalized Linear Model, which models the error differently, or even non-linear regression.
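A hedged simulation sketch of this effect (the data-generating values are illustrative): fit OLS on the log scale, then look at the residuals back on the original scale:

    import numpy as np

    rng = np.random.default_rng(1)
    x = rng.uniform(1, 100, size=500)
    y = 2.0 * x**1.5 * np.exp(rng.normal(0, 0.4, size=500))  # multiplicative error

    b, log_a = np.polyfit(np.log(x), np.log(y), 1)           # OLS on the log scale
    fitted = np.exp(log_a) * x**b
    resid = y - fitted
    for lo, hi in [(1, 10), (10, 100)]:                      # residual spread grows
        m = (x >= lo) & (x < hi)                             # with the fitted values
        print((lo, hi), resid[m].std())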
33,793
Linear regression with log transformed data - large error [duplicate]
Roland already gave a good answer. To say the same thing another way - you shoved some dirt under a carpet, then you cleaned the top of the carpet. The dirt is still there! There are several models that don't rely on normality of residuals. One that I think is very under-used is quantile regression; in R there is the quantreg package.
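For completeness, a hedged Python counterpart (the answer names R's quantreg; statsmodels offers an analogue, and the simulated data here is purely illustrative):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(2)
    df = pd.DataFrame({"x": rng.uniform(0, 10, 500)})
    df["y"] = 1.0 + 2.0 * df["x"] + rng.standard_t(3, 500)   # heavy-tailed errors

    median_fit = smf.quantreg("y ~ x", df).fit(q=0.5)        # median regression
    print(median_fit.params)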
33,794
SelectKBest - Feature Selection - Python - SciKit Learn
No, SelectKBest works differently. It takes as a parameter a score function, which must be applicable to a pair $(X, y)$. The score function must return an array of scores, one for each feature $X[:, i]$ of $X$ (it can additionally return p-values, but these are not required). SelectKBest then simply retains the $k$ features of $X$ with the highest scores.

So, for example, if you pass chi2 as the score function, SelectKBest will compute the chi2 statistic between each feature of $X$ and $y$ (which is assumed to be class labels). A small value means the feature is close to independent of $y$. A large value means the feature is non-randomly related to $y$, and so is likely to provide important information. Only the $k$ highest-scoring features will be retained.

Finally, SelectKBest has a default behaviour implemented, so you can write select = SelectKBest() and then call select.fit_transform(X, y) (in fact I have seen people do this). In this case SelectKBest uses the f_classif score function. This interprets the values of $y$ as class labels and computes, for each feature $X[:, i]$ of $X$, an $F$-statistic: the one-way ANOVA $F$-test, with $K$ being the number of distinct values of $y$. A large score suggests that the means of the $K$ groups are not all equal. This is not very informative, and it is meaningful only when some rather stringent conditions are met: for example, the values $X[:, i]$ must come from normally distributed populations, and the population variance of the $K$ groups must be the same. I don't see why this should hold in practice, and without these assumptions the $F$-values are meaningless. So using SelectKBest() carelessly might throw out many features for the wrong reasons.
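A minimal usage sketch (iris is chosen here only because chi2 requires non-negative features):

    from sklearn.datasets import load_iris
    from sklearn.feature_selection import SelectKBest, chi2

    X, y = load_iris(return_X_y=True)
    select = SelectKBest(chi2, k=2)
    X_new = select.fit_transform(X, y)
    print(select.scores_)                  # one chi2 score per original feature
    print(X_new.shape)                     # (150, 2): only the 2 best features kept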
33,795
Weighted Root Mean Square Error
As already noticed by whuber in a comment, it is not clear whether your procedure for setting the weights is valid. Notice that in non-weighted RMSE the larger areas already have greater weight in the estimate, simply because they are larger and so appear more often in your data. That is why, as suggested, people usually down-weight such subpopulations, so that the final estimate treats all the subpopulations more evenly. However, if you do want to use a weighted RMSE, then recall that RMSE is by design very close to a standard deviation, so why not look at how a weighted variance is calculated? $$ \sigma^2 = \sum_{i=1}^n w_i (x_i - \bar x)^2 $$ where $\bar x$ is the weighted mean, the weights are non-negative, and $\sum_{i=1}^n w_i = 1$. In the same way, you can define the weighted RMSE as $$ \text{RMSE} = \sqrt{\sum_{i=1}^n w_i (\hat x_i - x_i)^2} $$ Notice that we take the weighted sum of the squared differences, not their mean. An unweighted mean is the same as a weighted mean with all weights equal to $w_i = 1/n$, so taking the arithmetic mean of the already-weighted terms would amount to dividing by $n$ a second time. Check also: Weighted Variance, one more time
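A tiny numeric sketch of this formula (Python/NumPy, with made-up numbers; the variable names are mine):

import numpy as np

actual    = np.array([1.0, 2.0, 3.0, 4.0])
predicted = np.array([1.1, 1.9, 3.4, 3.7])
w = np.array([4.0, 2.0, 1.0, 1.0])
w = w / w.sum()  # normalize so the weights sum to 1

# Weighted RMSE: weighted *sum* of squared errors under the square root,
# with no extra division by n. With w_i = 1/n this reduces to ordinary RMSE.
wrmse = np.sqrt(np.sum(w * (predicted - actual) ** 2))
print(wrmse)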
33,796
Weighted Root Mean Square Error
This is a very old thread, but I would change David Dickson's function as follows:

weighted.rmse <- function(actual, predicted, weight) {
  # dividing by sum(weight) makes arbitrary (non-normalized) weights work
  sqrt(sum((predicted - actual)^2 * weight) / sum(weight))
}

Tim's answer is valid only if the weights sum to 1; this function generalizes it so that it works with any (non-normalized) set of weights.
33,797
Weighted Root Mean Square Error
If you do not mind doing some reading, I recommend looking up Sampling: Design and Analysis by Lohr or Sampling by Thompson for examples of model-based weighting schemes for the mean squared error (MSE). I'm sure you'll find copies online with a simple Google search. Since your data seem to deal with area (location), I recommend reviewing the chapters on spatial sampling in Thompson's Sampling. Note that you should try to understand how your data were sampled (obtained), as that will affect the weights.
33,798
How to cross validate stepwise logistic regression?
The Elements of Statistical Learning puts the answer quite clearly (second edition, p. 246):

"In general, with a multistep modeling procedure, cross-validation must be applied to the entire sequence of modeling steps. In particular, samples must be 'left out' before any selection or filtering steps are applied. There is one qualification: initial unsupervised screening steps can be done before samples are left out."

In this type of analysis the problem is that the "ground truth" deduced from your sample might not represent the "ground truth" in the population. Cross-validation can help with generalizing results to the population, but only if all steps of the modeling procedure are repeated within each fold of the validation.

As both I and @user777 recommend, you will probably be better off using a method other than stepwise selection to deal with your correlated predictor variables. With highly correlated predictors, stepwise selection will almost certainly lead to highly varying choices of predictors from fold to fold. Regularization methods handle correlated predictors much better. Ridge regression, for example, is essentially a principal-components regression with weights on the components, so that highly correlated variables tend to show up together in the same components.
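To make the "entire sequence" requirement concrete, here is a minimal sketch in Python/scikit-learn; X and y stand for your predictors and outcome, and forward selection is used as a stand-in for a stepwise procedure. Because the selection step lives inside the pipeline, it is re-fit on every training fold, so the held-out fold never influences which predictors are chosen.

from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

# Forward selection re-run inside each cross-validation fold.
selector = SequentialFeatureSelector(
    LogisticRegression(max_iter=1000),
    n_features_to_select=5,
    direction="forward",
)
pipe = Pipeline([
    ("select", selector),
    ("model", LogisticRegression(max_iter=1000)),
])
scores = cross_val_score(pipe, X, y, cv=5)  # selection happens anew on each training fold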
33,799
How to cross validate stepwise logistic regression?
The 1970s called. They want their antiquated, dilapidated stepwise regression back.

The 1990s called. They want you to employ the ad hoc heuristic methods, including LASSO!!!!, advocated in The Elements of Statistical Learning, as quoted in EdM's answer.

The new millennium called. It's telling you to forget all that ad hoc nonsense and employ a systematic mixed-integer optimization approach to choosing the best subsets. This is the way to go, baby: "Best Subset Selection via a Modern Optimization Lens", Bertsimas, King, and Mazumder. It will blow the recommendations of The Elements of Statistical Learning out of the water. Of course, there may not be canned R packages ready to go just yet. The final version of the article was later published in The Annals of Statistics (open access).
33,800
Find distribution and transform to normal distribution
The data look as if they follow an exponential distribution. As a transformation, a simple log seems to work fine:

# Histogram of the log data with a normal density overlaid;
# the mean 2 and sd 1 are presumably eyeballed from the data.
hist(log(dph), freq = FALSE, ylim = c(0, .4))
x <- seq(-6, 6, by = 0.01)
lines(x, dnorm(x, mean = 2, sd = 1), col = "red")

# Normal Q-Q plot of the log data.
qqnorm(log(dph), ylim = c(0, 5))
qqline(log(dph), col = "red")