Why does statistical significance increase with data, BUT the effects may not be meaningful?
I have read that "when you get more and more data, you can find statistically significant differences wherever you look." This is not always true; however, if your null hypothesis is that two groups of people are exactly, 100% the same, then it is, because that null hypothesis will almost always (or always) be false. If instead your null hypothesis is that the speed of light is 299,792,458 m/s, and you measure it many times with tools that are not biased toward error in one direction or the other, then more data does not make you more likely to get significance. Why is this the case? (Are there any intuitive examples that show this behavior?) When it is the case, it is because the null hypothesis is false or there is some bias in the measuring tool.

Why do such increases in statistical significance not necessarily imply that the observed effects are meaningful or important? Because very small differences are just as likely to arise from reasons other than the one you ran the experiment to test (e.g. a problem with the measuring device, or a baseline difference between groups), and there is no way to guess which has occurred. Note that this is the case even if the effect is large; it is just less likely (admittedly "hand-wavy", but intuitively plausible) that you would observe a large effect if all factors except your independent variable were held relatively constant. Also, very small differences usually do not provide any reason to take action based on the result: the cost of performing the action will usually outweigh the benefits.

Edit: Another thing: in the case of a null hypothesis predicted by theory, a non-significant result is obviously important, as your theory has been corroborated. Even in the case of the more common "always false" null hypothesis, a "non-significant" result can be meaningful. Lack of significance, especially with a large sample size, tells you that any effect or difference is small relative to the background noise. I would say the practice of ignoring non-significant results is seriously flawed.
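To make the first point concrete, here is a minimal R sketch (my own illustration, not from the post; the true difference of 0.02 standard deviations and the sample sizes are arbitrary assumptions). With a tiny but nonzero true difference, a t-test drifts toward significance as n grows, even though the difference remains practically negligible:

set.seed(42)
delta <- 0.02  # assumed tiny true difference, in standard-deviation units
for (n in c(100, 10000, 1000000)) {
  g1 <- rnorm(n, mean = 0)
  g2 <- rnorm(n, mean = delta)
  cat("n =", n, " p =", signif(t.test(g1, g2)$p.value, 3), "\n")
}

At n = 100 the p-value will typically be large; by n = 1,000,000 it is essentially zero, yet the underlying difference is still only 0.02 standard deviations.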
Why does statistical significance increase with data, BUT the effects may not be meaningful?
I also wish to highlight that you will not always find statistically significant results, even with nearly infinite data. Statistically significant results only represent what are likely to be true differences, regardless of size. If the difference does not exist, then the number of cases does not matter. Consider two samples of 10 million trees drawn from populations that have, on average, exactly the same height: an overall sample of 20 million trees will not make the (nonexistent) difference statistically significant, beyond the nominal false-positive rate of the test. It is always important to assess the size of the effect when results are statistically significant. Whether the results are important or meaningful depends on the context of what you are exploring: a 1% difference may be very unimportant when considering shoe size, but very meaningful when it represents the odds of dying from a disease in a population of 10 billion.
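A quick R sketch of this point (my own, with arbitrary sample sizes): when the two populations are truly identical, the proportion of "significant" t-tests stays near the nominal 5% false-positive rate no matter how large the samples get:

set.seed(1)
sig_rate <- function(n, reps = 2000) {
  # fraction of tests that come out "significant" when there is no true difference
  mean(replicate(reps, t.test(rnorm(n), rnorm(n))$p.value < 0.05))
}
sapply(c(20, 200, 2000), sig_rate)  # all roughly 0.05, regardless of n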
Why does statistical significance increase with data, BUT the effects may not be meaningful?
It is not necessarily the case that you will always find significant differences as you increase sample size, but it becomes more and more likely. As several people have pointed out, truly identical samples may not yield a significant difference. What a large sample does do is make very, very small differences much more likely to be detected -- differences that we, in the real world, can't really act on in any meaningful fashion. For example, if I told you that the average IQ of one group was 100.0001 and of the other group 100.0002, would you really be able to treat the second group as "smarter" (given all the caveats around IQ as a measure of intelligence)?

I'll use an example from my own work: I was simulating an intervention in a hospital to help prevent patients from developing a particular disease. My data set was a number of simulated hospitals with the treatment and a number of hospitals without it. The difference between them was statistically significant, and strongly so. This was entirely because the "No Treatment" hospitals had a few examples with slightly more infections. In most meaningful ways, the two arms were identical: they had the same median number of cases, the same minimum, and the same 75th, 95th, and even 99th percentile number of cases. The significance was entirely driven by a few edge cases at the extreme end of the distribution… and a large sample size. The effect of the treatment was, in the real world, utterly undetectable and meaningless. But because I had a large sample size, it was statistically significant. If I had wanted it to be more so, I could have gone to dinner and let the simulation run longer, but that wouldn't have made the intervention any more effective.
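Here is a hedged R sketch of the same phenomenon (the Poisson outcome scale and the 0.5% outlier fraction are my assumptions, not the author's actual simulation): the two arms match at every quantile shown, yet a handful of extreme cases plus a large sample size drive the test to significance:

set.seed(7)
n <- 1e5
treated <- rpois(n, lambda = 10)        # simulated infection counts
control <- rpois(n, lambda = 10)        # same distribution...
control[1:500] <- control[1:500] + 30   # ...plus a few extreme edge cases (0.5%)
rbind(treated = quantile(treated, c(0.5, 0.75, 0.95, 0.99)),
      control = quantile(control, c(0.5, 0.75, 0.95, 0.99)))  # nearly identical
t.test(control, treated)$p.value        # typically far below 0.05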
Why does statistical significance increase with data, BUT the effects may not be meaningful?
Assume you have a treatment for the common cold which may or may not work. You administer it to one person, and that person gets better. It could be that your treatment is working, or it could be that the person just happened to get better by chance. Now if you apply this treatment to two people and they both get better, this is already more convincing... what are the odds that two people you gave a treatment to both got better? Now imagine that you give the treatment to a group of 500 people and they all get better, while in another group of 500 people that don't receive your treatment, only 10 get better. It could be that the group that you treated just happened to be more lucky, but as the number of people increases, the odds of that fluke happening become extremely small... it's more likely that your treatment actually has an effect. The more data you have, the less likely it is that the patterns you observe are a fluke.
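You can put a number on that intuition. A quick check in R (my own arithmetic, not the answerer's), using Fisher's exact test on the 500-person scenario:

# 500/500 recover with treatment vs. 10/500 without, as in the example above
tab <- matrix(c(500, 0, 10, 490), nrow = 2, byrow = TRUE,
              dimnames = list(c("treated", "untreated"),
                              c("recovered", "not recovered")))
fisher.test(tab)$p.value  # astronomically small: a lucky fluke is implausible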
Why does statistical significance increase with data, BUT the effects may not be meaningful?
I believe user27840 has the right answer, but doesn't quite nail the intuition...

Let's take a common case: you are comparing the means of two groups, and your null hypothesis is that they are (exactly) equal. The test also makes an assumption about the distribution of the data, often that the data have a "normal distribution". (Technically, this phrase is wrong, but it's commonly used.)

The mean, by itself, doesn't tell you much. There is also the standard error of the mean, which reflects your uncertainty about the actual mean value. The standard error of the mean is tied to the assumed distribution, and the more points you have in your calculation, the smaller it will be: the more certain you are of your estimate of the mean. This is what works against you. With a small amount of data, the standard errors of your means will be larger, and unless the means are far apart, you will fail to reject the null hypothesis (that they are equal): the means may appear different, but your uncertainty is large enough that you can't be sure. As you get more and more data, you become more and more sure of each mean -- the standard errors shrink -- and the means can be closer and closer together while you remain sure they're not the same.

The problem is that you are able to be certain about smaller and smaller differences, but practically speaking, very small differences don't matter. Of course, the units you're measuring in and the subject matter determine what counts as a "very small" difference.
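To spell out the key step with the standard textbook formulas (not written out in the original answer): the standard error of a sample mean shrinks like $1/\sqrt{n}$, so for a fixed observed difference the test statistic grows like $\sqrt{n}$:
$$\mathrm{SE}(\bar{x}) = \frac{s}{\sqrt{n}}, \qquad t = \frac{\bar{x}_1-\bar{x}_2}{\sqrt{s_1^2/n_1 + s_2^2/n_2}} \approx \frac{\bar{x}_1-\bar{x}_2}{s\sqrt{2/n}} \propto \sqrt{n}.$$
Any fixed nonzero difference, however tiny, therefore eventually produces an arbitrarily large $t$ and an arbitrarily small p-value.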
Why does statistical significance increase with data, BUT the effects may not be meaningful?
The traditional hypothesis testing framework seeks to keep the type I error rate constant. The likelihood paradigm could be said to be more meaningful in this regard, as both type I and type II errors $\rightarrow 0$ as $n \rightarrow \infty$. This makes a pure likelihood-based approach (as opposed to sample-space-based frequentist methods) less likely to flag trivial differences.
Why does statistical significance increase with data, BUT the effects may not be meaningful?
I think this happens because people analyse their data after it has been observed. Further, this analysis is not done "just for the sake of it" -- you may change your mind about what is important based on this analysis (as you should).

As a simple example, take the comparison of the means of two groups -- say jurisdiction A has higher test scores than jurisdiction B. But after analysing the data, you find that the distribution of scores in jurisdiction A has three modes, while jurisdiction B has two. After seeing this, why would you care whether the means are different "overall"? You are likely to dismiss the original hypothesis as "meaningless" and report the "interesting finding" of a multimodal distribution, possibly with a statistically significant test to go with it. Follow-up analysis would likely look for a variable that captures these modes.

This has been referred to as "researcher degrees of freedom", and it is not accounted for in your standard p-value, because your test statistic is now a function of your analysis. To see this, note that if you were to repeat the process (say in a follow-up survey), you would analyse the new data set afresh. Additionally, this problem becomes worse as your data sets become larger, because there are much richer types of analysis you can do and more "real" differences that you can detect. For example, you can't detect three modes with a small data set.
Effect size and statistical significance
Yes, this can make complete sense. In fact, it is also possible (though perhaps rarer) to see a large estimated effect size without statistically significant evidence that it isn't zero. The issue is that your effect size is just a point estimate, and hence a random variable that depends on the particular sample you have available for analysis. If you construct a 95% confidence interval for your estimate, you will see that it includes zero, which is why your p-values are above 0.05.
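A minimal R illustration (my own, with an assumed true effect of 0.5 standard deviations and only 15 observations per group): the estimated effect can be sizeable while the 95% confidence interval still straddles zero:

set.seed(3)
g1 <- rnorm(15)
g2 <- rnorm(15, mean = 0.5)  # assumed medium true effect, small sample
tt <- t.test(g2, g1)
tt$estimate   # group means, often visibly apart
tt$conf.int   # with n this small, the interval will often include 0
tt$p.value    # correspondingly often above 0.05

Because the sample is small, this outcome is the likely one but not guaranteed; re-running with other seeds will sometimes give a significant result.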
Effect size and statistical significance
Yes. This basically means that you see a medium (say) effect, but you can't be sure at the 95% level that what you see is not due to a random fluctuation. This probably happens because your sample is too small. You may want to have a look at http://en.wikipedia.org/wiki/Statistical_power. (And to all the professionals here: yes, I know this is all very imprecise, vague, and wrong. I'm just trying to match the level of the answer to the level of the question.)
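R's built-in power.t.test makes the "sample too small" point concrete (the medium effect size of 0.5 standard deviations here is a conventional assumption, not from the answer):

power.t.test(n = 15, delta = 0.5, sd = 1)$power   # about 0.26: badly underpowered
power.t.test(power = 0.8, delta = 0.5, sd = 1)$n  # about 64 per group needed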
Effect size and statistical significance
What a p-value is: a p-value answers this question: if, in the population from which this sample is drawn, there was really no effect at all, how likely is a result as extreme or more extreme than the one we got in this sample? That is all it means, and this question is almost never of interest.

Effect sizes (like Cohen's d) are much more important in nearly all cases. They answer the question we are usually actually interested in: how big is the effect?

I don't think we should make our answers so simple that they are wrong; I think we can help less-schooled questioners understand what is really going on. And, in this case, I think that can be done fairly easily.
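For concreteness, Cohen's d is just the difference in means scaled by the pooled standard deviation. A small R helper (my own sketch, not part of the answer):

cohens_d <- function(x, y) {
  nx <- length(x); ny <- length(y)
  # pooled standard deviation across the two groups
  sp <- sqrt(((nx - 1) * var(x) + (ny - 1) * var(y)) / (nx + ny - 2))
  (mean(x) - mean(y)) / sp
}
cohens_d(rnorm(500, mean = 0.5), rnorm(500))  # close to the true d of 0.5

Unlike a p-value, this quantity does not drift with sample size; more data simply estimates it more precisely.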
Should a multiple choice poll contain a neutral response?
Neutral points can mean many different things to many people. The way you labeled the middle choice yourself reflects this uncertainty. Some reasons for choosing the neutral point, from the perspective of a participant:

- I don't care to really think about my answer to this question (I just want to get paid and leave)
- I have no strong opinion on this question
- I don't understand the question, but don't want to ask (I just want to get paid and leave)
- with regard to the given aspect, the product is truly medium in quality, i.e., it neither excels nor falls short of my expectations
- with regard to the given aspect, the product has some high-quality features and some low-quality features

Without further qualification, the people who choose the middle category can thus represent a very heterogeneous collection of attitudes and cognitions. With good labeling, some of this confusion can be avoided. You can also present a separate "no answer" category. However, participants often interpret such a category as a signal to provide an answer only if they feel very confident in their choice. In other words, participants then tend to choose "no answer" because they feel they're not well-informed enough to make a choice that meets the questionnaire designer's quality standards.

IMHO there's no right answer to your question. You have to be very careful in labeling some or all of the presented choices, and do lots of pre-testing, with additional free-form interviews of participants about how they perceived the options. If you're really pragmatic, you just choose a standard label set for which you can cite an article that everybody else always cites, and be done with it.
Should a multiple choice poll contain a neutral response?
I think this whole "force people to choose" thing is just a complete red herring. People say it to me all the time. To me it sounds like "force people to state the capital of Uzbekistan". They don't know, and forcing them won't make them know any better. With that mini-rant over, my only sensible contribution is to say that you should always pilot surveys whenever you can. Pilot both versions, see who uses the "don't know" category in the one where it's included, and look at the distribution of responses. And talk to the people who filled it out: "Were you sure of your answer?", "What made you say 'don't know' here?" -- that kind of thing.
Should a multiple choice poll contain a neutral response?
I try to avoid questions with more than two answers, as it is impossible to compare the answers between users ("good" vs. "very good" can be very subjective). I rephrase most questions into a binary type (though still giving the possibility to be indifferent):

"Would you use the product every day?" Yes / No / Indifferent
"Would you recommend the product to your friends?" etc.

I found results obtained with this method to be far more consistent with the impressions I got from later interviews and performance tests. However, my work so far focuses on Human-Computer Interaction questionnaires. The best approach is still to conduct in-person interviews, as you learn more from them. Of course, they are also very time-consuming :(
Should a multiple choice poll contain a neutral response?
I would suggest that when trying to judge people's like or dislike of something, there are a few relevant scales of measure:

- How strong are any positive feelings for the product?
- How strong are any negative feelings for the product?
- How thoroughly have the things that would cause positive or negative feelings been explored?

Since it's possible for strong negative feelings to be generated by problems which are easily fixed (and in some cases, the fact that a problem should be easily fixable may increase the extent of negative feelings), a company should want to know if there are many people who have both strong positive feelings and strong negative feelings, since addressing those people's complaints could generate massive goodwill relatively cheaply. Additionally, a company may benefit from knowing if there are many people who discover things they don't like about a product before they delve deeply into it, and thus never go beyond those complaints. Further, if a product is designed to be useful at both a "shallow" level and a "deep" level, and only 10% of users go deep but 90% of those people have trouble, that would represent a very different picture from what one would get merely knowing that 9% of users had trouble.

A common problem with many surveys that try to restrict users' options is that they often don't provide choices that reflect what users want to say. Asking about positive and negative feelings separately will help users who think a product has some value but also have some major peeves with it; further, asking how well users have explored a product will help distinguish those who haven't found problems because there really aren't any from those who haven't found problems because they've not used the product much.
Should a multiple choice poll contain a neutral response?
If you wish to detect overt opinions then put in a neutral option. If you wish to detect any potential positive or negative bias then leave it out. As caracal said, label things as unambiguously as possible with respect to what you wish the options to reflect. I've seen studies where only the form of response was changed. When there were only two options, like / dislike, then two stimuli were rated as very strongly liked in roughly equal proportions. When subjects were subsequently given an infinite rating scale with neither like nor dislike in the middle the rating differences between the two stimuli were vast (75% of the scale vs. 4%). This suggests that with a limited scale and no neutral option you can detect very small biases as large effects so you should be careful in interpreting such scales and use them judiciously.
Should a multiple choice poll contain a neutral response?
Forcing the respondent to give a positive or negative answer is not appropriate in this situation; the respondent may genuinely be undecided, and that is all the more likely when the product is new. If you are developing an instrument to measure quality, it is better to use the 5 options as you have given them.
Expected value of random variable that is defined by another
Your random variable takes the value $1$ with probability $p+\frac{1-p}{k}$, and takes each value $j\in\{2, \dots, k\}$ with probability $\frac{1-p}{k}$. So the expectation is simply $$ \begin{align*} EX = & 1\times \big(p+\frac{1-p}{k}\big) + \sum_{j=2}^k j\times\frac{1-p}{k} \\ = & p+\frac{1-p}{k}\times \sum_{j=1}^k j \\ = & p+\frac{1-p}{k}\times\frac{k(k+1)}{2} \\ = & p+\frac{(1-p)(k+1)}{2}. \end{align*}$$
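A quick Monte Carlo check of this formula in R (my own verification sketch; p = 0.3 and k = 6 are arbitrary choices). With probability p the variable equals 1; otherwise it is uniform on {1, ..., k}:

set.seed(10)
p <- 0.3; k <- 6; reps <- 1e6
B <- rbinom(reps, 1, p)                   # the p-coin
Y <- sample.int(k, reps, replace = TRUE)  # uniform on 1..k
X <- ifelse(B == 1, 1, Y)
mean(X)                                   # empirical mean, ~2.75
p + (1 - p) * (k + 1) / 2                 # formula: 0.3 + 0.7 * 3.5 = 2.75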
Expected value of random variable that is defined by another
Your answer is okay and is justified below. You can write $$X=B+(1-B)Y$$ where $B\sim\text{Bernoulli}(p)$ and $Y\sim\text{Unif}(k)$ are independent random variables. Then: $$\mathbb EX=\mathbb EB+\mathbb E[(1-B)Y]=\mathbb EB+\mathbb E(1-B)\mathbb EY=p+(1-p)\left(\frac12+\frac12k\right)$$ This way of working is recommended in situations where the random variable is defined by means of cases.
Lasso coefficient for some features is higher than Linear Regression Coefficient
As German Demidov notes, this is perfectly fine. The Lasso will shrink some of your coefficients to zero, but it does not have the property of shrinking all coefficients compared to the OLS estimate. Rather, it may increase some coefficients to "compensate" for the ones it has shrunk. There is nothing to worry about. (A very good question, though.)
Lasso coefficient for some features is higher than Linear Regression Coefficient
Lasso coefficients can shrink again as you move closer to the OLS solution (i.e., as the penalty is relaxed). See for instance: Why under joint least squares direction is it possible for some coefficients to decrease in LARS regression? Picture the relationship between the coefficients and the error (the original answer included an image of this, with the error depicted by a green surface and the size of the coefficients by a red surface): the lasso balances the two. For a given amount of regularisation it might be that some parameters "overshoot" and are larger than the actual OLS estimates. By having these parameters larger, you will have other parameters lower. This situation happens when one parameter can take the role of several others. In that case, this parameter will initially be able to model the outcome very well even with a small coefficient budget (a single coefficient above its true model value standing in for several), but if you allow the total of the coefficients to be larger, the others may catch up. A clear illustration of this principle is in this question, where a coefficient that should be zero is initially positive; this happens because that parameter models the outcome better than the true model does when the penalty is high: Is Ridge more robust than Lasso on feature selection?
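A hedged glmnet sketch of the "one parameter takes the role of several" situation (the setup is my own; exact numbers depend on the simulated draw and the chosen penalty): with two highly correlated predictors of equal true effect, a moderate lasso penalty often zeroes one coefficient and inflates the other above its OLS estimate:

library(glmnet)  # assumes the glmnet package is installed
set.seed(1)
n  <- 200
x1 <- rnorm(n)
x2 <- 0.95 * x1 + sqrt(1 - 0.95^2) * rnorm(n)  # corr(x1, x2) ~ 0.95
y  <- x1 + x2 + rnorm(n)                       # both true coefficients are 1
X  <- cbind(x1, x2)
coef(lm(y ~ X))                      # OLS: both estimates near 1
coef(glmnet(X, y), s = c(0.5, 0.1))  # stronger penalty: one ~0, the other often > 1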
Under Berkson's Fallacy, why can't popular novels be terrible because the masses fail to appreciate quality?
This reminds me strongly of Sturgeon's Law: 90% of everything is crud.

The "law" was coined by scifi author and critic Theodore Sturgeon as a response to people being dismissive of scifi because "90% of it is crud". He realized that the same was true of any genre: cozy mystery fans, say, know how to spot works in the cozy mystery genre that are good (for any definition of good), but would be choosing scifi works more or less at random. Without enough grounding in the genre to know what to look for (or what to look out for), "90% of scifi is crud" but "most cozy mysteries are good", because the speaker knows how to spot the good ones and avoid the bad.

Quality can help determine popularity. But so can flukes, fads, historical inertia, and any number of other factors. Not so long ago, if you published a book with a cover that vaguely implied werewolves and teenage-angst-filled love triangles, it'd sell passably well even if it wasn't Great Literature: werewolf romance novels were a fad, and some of them got high enough on the charts to qualify as popular. And, of course, quality isn't a guarantee of popularity! Again, due to flukes, fads, etc., a superb work might not get the recognition and popularity it deserves -- consider Vincent van Gogh, who famously died in poverty, unrecognized as a great artist until years after his death. How many other van Goghs are out there, their work never rediscovered? To an extent, there's also the matter of taste: what I find terrible you might think is a Great Work of Literature.

Putting all of that together, the larger quote makes sense: "You know how popular novels are terrible? It’s not because the masses don’t appreciate quality. It’s because there’s a Great Square of Novels, and the only novels you ever hear about are the ones in the Acceptable Triangle, which are either popular or good." That is: the books you've heard of are in the Acceptable Triangle -- they're popular and/or they're good. Most novels are terrible, though, so a lot of terrible novels become popular for other reasons. Relatively few novels are Great Works, and some of those get lost in the sea of mediocrity. Therefore, most popular novels are terrible not because the masses don't appreciate quality but because most works are terrible to start with.
Under Berkson's Fallacy, why can't popular novels be terrible because the masses fail to appreciate quality?
Berkson's Paradox doesn't prove that there's no correlation between quality and popularity, and Ellenberg isn't claiming that it does - it just counters the argument that "there is a negative correlation between quality and popularity among novels I have heard of, therefore quality and popularity are negatively correlated".
38,223
Why is the intercept in multiple regression changing when including/excluding regressors?
In addition to @DaveT's helpful answer, here are a few more clarifications regarding the estimated intercepts in your models.

Model 1

The (true) intercept in your first model,

lm(mpg ~ 1, data=mtcars)

represents the mean value of mpg for all cars represented by the ones included in this data set, regardless of their displacement (disp) or horse power (hp). In this sense, the (true) intercept is simply the unconditional mean of mpg. Based on the data, its value is estimated to be 20.091.

Model 2

The (true) intercept in your second model,

lm(mpg ~ disp, data=mtcars)

represents the mean value of mpg for all cars represented by the ones included in this data set which share the same displacement (disp) value of 0. This intercept is estimated from the data to be 29.599855. Because displacement is a measure of the engine size of a car, it doesn't make sense that you would have a car with a displacement of 0, so the intercept in this model has no real-world interpretation.

To get a meaningful interpretation for the intercept in your second model, you could center the disp variable around its observed mean value in the data (presuming disp has an approximately normal distribution) and re-fit the model:

disp.cen <- mtcars$disp - mean(mtcars$disp)
lm(mpg ~ disp.cen, data=mtcars)

In the re-fitted second model, the intercept will represent the mean value of mpg for all cars represented by the ones included in this data set which have a "typical" displacement (disp). Here, a "typical" displacement means the average displacement observed in the data.

Model 3

The (true) intercept in your third model,

lm(mpg ~ disp + hp, data=mtcars)

represents the mean value of mpg for all cars represented by the ones included in this data set which share the same displacement (disp) value of 0 and the same horse power (hp) value of 0. This intercept is estimated from the data to be 30.735904. Because displacement is a measure of the engine size of a car and horse power is a measure of the engine power of a car, it doesn't make sense that you would have a car with a displacement of 0 and a horse power of 0, so the intercept in this model is likewise meaningless.

To get a meaningful interpretation for the intercept in your third model, you could center the disp variable around its observed mean value in the data, center the hp variable around its observed mean value in the data (presuming both have approximately normal distributions), and then re-fit the model:

disp.cen <- mtcars$disp - mean(mtcars$disp)
hp.cen <- mtcars$hp - mean(mtcars$hp)
lm(mpg ~ disp.cen + hp.cen, data=mtcars)

In the re-fitted third model, the intercept will represent the mean value of mpg for all cars represented by the ones included in this data set which have a "typical" displacement (disp) and a "typical" horse power (hp). Here, a "typical" displacement means the average displacement observed in the data, whereas a "typical" horse power means the average horse power observed in the data.

Addendum

The word expected is synonymous with the word mean in this answer. Thus, the expected value of the variable mpg is the same as its mean (or average) value. There are two types of mean values for the mpg variable - unconditional and conditional.

The unconditional mean of mpg refers to the mean value of mpg across all cars represented by the ones in the dataset, regardless of their other characteristics (e.g., disp, hp). In other words, you would mix together all cars represented by the ones in your data - those with high disp and high hp, those with high disp and low hp, etc. - and compute their mean mpg value, which is an unconditional mean value (in the sense that it does NOT depend on other car characteristics).

The conditional mean of mpg refers to the mean value of mpg across those cars represented by the ones in the dataset which share one or more characteristics. You could have:

A conditional mean of mpg given disp;
A conditional mean of mpg given hp;
A conditional mean of mpg given disp and hp.

The conditional mean of mpg given disp refers to the mean value of mpg across all cars represented by the ones in your data set which share the same displacement (disp). Since disp can take multiple values, each of its values gives rise to a different conditional mean of mpg given disp. The model that describes how the conditional mean of mpg given disp varies as a function of the disp values is

lm(mpg ~ disp, data = mtcars)

and it assumes that the conditional mean of mpg given disp is a linear function of disp.

The conditional mean of mpg given hp refers to the mean value of mpg across all cars represented by the ones in your data set which share the same horse power (hp). Since hp can take multiple values, each of its values gives rise to a different conditional mean of mpg given hp. The model that describes how the conditional mean of mpg given hp varies as a function of the hp values is

lm(mpg ~ hp, data = mtcars)

and it assumes that the conditional mean of mpg given hp is a linear function of hp.

The conditional mean of mpg given disp and hp refers to the mean value of mpg across all cars represented by the ones in your data set which share the same displacement (disp) and the same horse power (hp). Since disp and hp can both take multiple values, each combination of their values gives rise to a different conditional mean of mpg given disp and hp. The model that describes how the conditional mean of mpg given disp and hp varies as a function of the disp and hp values is

lm(mpg ~ disp + hp, data = mtcars)

Of course, you could also have a model like

lm(mpg ~ disp*hp, data = mtcars)

The first of these two models assumes that disp and hp have independent (additive) effects on mpg, while the second assumes that the effect of disp on mpg depends on the value of hp, and the other way around.
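One way to verify the centering argument above is to fit the raw and centered versions of the second model side by side (a minimal sketch using the built-in mtcars data; the object names are mine):

fit.raw <- lm(mpg ~ disp, data = mtcars)
disp.cen <- mtcars$disp - mean(mtcars$disp)
fit.cen <- lm(mpg ~ disp.cen, data = mtcars)

coef(fit.raw)     # intercept ~ 29.60: predicted mpg at disp = 0
coef(fit.cen)     # intercept ~ 20.09: predicted mpg at the average disp
mean(mtcars$mpg)  # ~ 20.09: with a single centered predictor, the
                  # intercept equals the sample mean of the response

The slopes of the two fits are identical; centering only relocates the point at which the intercept is evaluated.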
38,224
Why is the intercept in multiple regression changing when including/excluding regressors?
Your professor's comment concerns the conditional mean: the mean of y when x meets a particular condition. In this case the intercept is the conditional mean of y when x=0. If x never takes the value of 0, then there is no conditional mean for x=0. As a simple example, let us look at y = -x + 10 for x from 0 to 10. If we fit a model to the data with no independent variables, then the best prediction for y is the mean of y, in this example y=5 (the intercept). Let us repeat the model with a single independent variable. The model now is y = 10 - x, so the intercept is now 10. The intercept has thus changed from 5 (with no independent variable) to 10 (with a single variable). If we started with a more complex dataset, then as we add terms to the model, the intercept and coefficients will change. Hopefully this example helps explain why the intercept changes with changes in the model.
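This example is easy to check directly in R (a minimal sketch; the data are noise-free, so the fits are exact):

x <- 0:10
y <- 10 - x

coef(lm(y ~ 1))  # intercept = 5, the (unconditional) mean of y
coef(lm(y ~ x))  # intercept = 10, the conditional mean of y at x = 0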
38,225
Why is the intercept in multiple regression changing when including/excluding regressors?
Question part 1

The constant/intercept is defined as the mean of the dependent variable when you set all of the independent variables in your model to zero.

In short: the intercept term relates to the prediction based on the fitted model when all independent variables are set to zero. This prediction may be more or less good depending on bias and noise. What changes when you include more regressors is that the model has more or less bias, and this will influence the prediction of the intercept.

Example case

Say we model points according to

$$y = 0.5 x^2 + 5x + 30 + \epsilon$$

with Gaussian noise $\epsilon \sim N(\mu = 0, \sigma^2 = 9)$, and let the predictor $x$ be normally distributed, $x \sim N(\mu = -3, \sigma^2 = 2)$:

set.seed(1)
x <- rnorm(n = 400, mean = -3, sd = 1.4)                     # sd = 1.4, variance ~ 2
y <- 30 + 5*x + 0.5*x^2 + rnorm(n = 400, mean = 0, sd = 3)   # sd = 3, variance = 9

(Note that R's rnorm() takes the arguments mean and sd - not mu and sigma - and that sd is the standard deviation, not the variance.)

Then it will look like this (I have highlighted the points around $x=0$ in purple). The model can also be expressed as

$$y \mid x \sim N(\mu = 0.5 x^2 + 5x + 30,\; \sigma^2 = 9)$$

set.seed(1)
x <- rnorm(n = 400, mean = -3, sd = 1.4)
y <- rnorm(n = 400, mean = 30 + 5*x + 0.5*x^2, sd = 3)

which means that the value of $y$ conditional on $x$ is distributed as a normal distribution with mean $\mu = 0.5 x^2 + 5x + 30$ and variance $\sigma^2 = 9$.

Answer

The constant/intercept is defined as the mean of the dependent variable when you set all of the independent variables in your model to zero.

This holds only for the true quadratic curve

$$y = 0.5 x^2 + 5x + 30$$

which has intercept $30$. Only for the true intercept can we say that the intercept relates to the mean of the data points conditional on the value $x=0$. I have marked this point in the figure with a purple square dot.

For the fitted curves...

$$\begin{array}{rcccccccl} y &=& & & &+& {20.1} &+& \epsilon \\ y &=& &+& 2.072 \, x &+&{26.421} &+&\epsilon \\ y &=& 0.3959 \, x^2 &+& 4.4453 \, x &+& \underbrace{{29.2484}}_{\text{intercept terms}} &+& \epsilon \end{array}$$

...the intercept terms do not refer exactly to the mean of the data (conditional on $x=0$). Rather, they refer to the predicted (conditional) mean of the data, and as you can see those predictions can be more or less good due to bias and/or noise. I have marked these points in the figure with white square dots.

In the special case that you fit an intercept-only model $y = a + \epsilon$, the estimated intercept term $\hat{a}$ will happen to coincide with the unconditional/global mean of the data sample, $\hat{a} = \bar{y}$. Note that this only means $\bar{y}$ (the mean of some observed sample) is a predictor for the true mean of the entire population (it is not equal to it).

Question part 2

So when in my last model, disp and hp are zero, the mean should be 30.7?!

Obviously there's a distinction between "being zero" and "being included in the model/estimation". The distinction is as follows:

When disp is not in the model, the intercept will refer to the mean of mpg over all values of disp.
When disp is in the model but set to zero, the intercept will refer to the mean of mpg for the value disp=0.

The image below tries to explain intuitively what this 'conditional on disp=0' means. (Note: I have augmented the data with values from another cars data set to make the histograms better looking. From: https://github.com/RodolfoViana/exploratory-data-analysis-dataset-cars and http://www.rpubs.com/dksmith01/cars )

On the left you see the joint distribution of mpg and disp. On the right (in the margin) you see the marginal distribution of mpg only. This marginal distribution can be split up based on conditions on disp. In the image this is sketched for displacement below 100, between 100 and 300, and between 300 and 500 cubic inches. The intercept (displacement = 0) would just be another such condition, besides the three sketched in the image.

For cars it would not make physical/practical sense to have the regressors set at zero (also note the broken gray line that I added, which is the model $\text{mpg} = 270/\sqrt{\text{disp}}$; this is probably a more realistic model, and that line will never intercept the y-axis at disp=0). The position of the intercept is arbitrary, and you can place it anywhere with a shift of variables (think for instance of the temperature scales, where 0 degrees Fahrenheit/Kelvin/Celsius all mean something different).
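The "shift of variables" point can be demonstrated in a couple of lines (a minimal sketch with mtcars; the shift of 100 cubic inches is arbitrary):

fit0 <- lm(mpg ~ disp, data = mtcars)
fit100 <- lm(mpg ~ I(disp - 100), data = mtcars)  # move the origin of disp

coef(fit0)    # intercept: predicted mpg at disp = 0
coef(fit100)  # intercept: predicted mpg at disp = 100; the slope is unchanged

The fitted line is the same in both cases; only the location labelled "zero" - and hence the value reported as the intercept - has moved.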
38,226
How to visualize an evolution of a distribution in time?
EDIT: Didn't see it was a Python question. I think the idea stands, but I'm not sure whether a Python implementation is readily available. You might want to look into so-called ridge plots, using the R package ggridges (and gganimate for animation). Below is an example (it obviously doesn't have to be an animation): Code is available in this gist.
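For a static starting point, here is a minimal ggridges sketch on simulated data (all names and values below are made up for illustration):

library(ggplot2)
library(ggridges)

set.seed(1)
df <- data.frame(
  day   = factor(rep(1:8, each = 200)),
  value = rnorm(8 * 200, mean = rep((1:8) / 2, each = 200))
)

# One density per day, stacked vertically so the drift over time is visible
ggplot(df, aes(x = value, y = day)) +
  geom_density_ridges() +
  labs(x = "value", y = "day")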
38,227
How to visualize an evolution of a distribution in time?
Always think about the nature of your situation: Who will be the audience for this plot? What are my goals for it? What are my data?

You state that you have a "distribution which depends on a parameter which evolves over time". If your audience is fairly sophisticated, and this is a known, studied distribution (e.g., a Weibull), then you could estimate the changing parameter for each day, plot it on a scatterplot, and smooth it with something simple like a LOWESS line. Here's an example. These are coded in R, but they are intended to be easy to follow by people who don't use R, and should be possible to translate to Python.

library(fitdistrplus)      # we'll use this package
set.seed(9234)             # this makes the example exactly reproducible

day = rep(1:16, each=100)  # 1 through 16, each repeated 100x
y   = c()                  # an empty vector to hold the data
for(i in 1:16){            # generates data from Weibull distributions w/ the
  y = c(y,                 # shape parameter increasing (but decelerating) by day
        rweibull(n=100, shape=(.5 + .1*i - .002*(i^2)), scale=1))
}

# estimate Weibull shape & scale parameters by MLE for each day
d = t(sapply(split(y, day), function(x){ fitdist(x, distr="weibull")$estimate }))
d = data.frame(day=1:16, d)
d
#    day     shape     scale
# 1    1 0.6311143 0.9871380
# 2    2 0.7501392 1.0168905
# 3    3 0.7510426 0.8853516
# 4    4 0.8142484 0.8701132
# 5    5 0.9081937 1.1098466
# 6    6 0.9679144 1.0668120
# 7    7 1.0347746 1.0638731
# 8    8 1.1496184 0.9775989
# 9    9 1.1724681 1.0758072
# 10  10 1.2396152 0.9975250
# 11  11 1.2519313 0.8847656
# 12  12 1.4648643 1.0801915
# 13  13 1.3258313 0.9113326
# 14  14 1.4301392 0.9699252
# 15  15 1.5494493 1.0448072
# 16  16 1.5989056 1.0133831

windows()
plot(shape~day, d)
lines(lowess(d$day, d$shape), col="red")

If the data weren't from a known distribution, you could plot multiple lines tracing fixed quantiles over time.

dq = t(sapply(split(y, day), function(x){ quantile(x, probs=c(0.25, 0.5, 0.75, 0.95)) }))
dq = data.frame(day=1:16, dq)
names(dq) = c("day", "25%", "50%", "75%", "95%")
dq
#    day       25%       50%      75%
# 1    1 0.1447001 0.5212207 1.628061
# 2    2 0.2318992 0.6657878 1.394435
# 3    3 0.1868559 0.5122787 1.618891
# 4    4 0.1665822 0.6402280 1.259112
# 5    5 0.3038764 0.6778831 1.418966
# 6    6 0.2508331 0.7469482 1.447055
# 7    7 0.2759569 0.7411599 1.527585
# 8    8 0.2774774 0.7496496 1.421123
# 9    9 0.3630679 0.9203537 1.343523
# 10  10 0.3788195 0.6613015 1.263599
# 11  11 0.3514467 0.6411170 1.110531
# 12  12 0.4697239 0.8562416 1.253663
# 13  13 0.3281270 0.6732758 1.113507
# 14  14 0.4498140 0.7440592 1.143489
# 15  15 0.4391240 0.8440031 1.371128
# 16  16 0.4726235 0.8386493 1.210914

windows()
plot(1,1, xlim=c(1,16), ylim=c(0, 7), xlab="day", ylab="value", type="n")
lines(dq$day, dq$`25%`, col="red",    lty=2)
lines(dq$day, dq$`50%`, col="black",  lty=1)
lines(dq$day, dq$`75%`, col="blue",   lty=3)
lines(dq$day, dq$`95%`, col="purple", lty=4)
legend("topright", legend=c("95%", "75%", "50%", "25%"), lty=c(4,3,1,2),
       col=c("purple", "blue", "black", "red"))

If your audience won't be as sophisticated, and wouldn't know what a "Weibull" is or be thrown off by trying to follow the idea of the "75th percentile", and you want something with more pizzazz, make a panel of kernel density plots. (Admittedly, adibender's plot has more pizzazz than this.)

windows()
par(mfrow=c(4,4))
for(i in 1:16){
  plot(density(y[day==i]), main=paste("day", i), xlab="", xlim=range(y),
       ylim=c(0, .8), ylab="", axes=FALSE)
  box()
}
38,228
How to visualize an evolution of a distribution in time?
Ridge plots can be done with Python Seaborn. I found this beautiful example in their documentation and will repost it here.

import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt

sns.set_theme(style="white", rc={"axes.facecolor": (0, 0, 0, 0)})

# Create random data
rs = np.random.RandomState(1979)
x = rs.randn(500)
g = np.tile(list("ABCDEFGHIJ"), 50)
df = pd.DataFrame(dict(x=x, g=g))
m = df.g.map(ord)
df["x"] += m

# Initialize the FacetGrid object
pal = sns.cubehelix_palette(10, rot=-.25, light=.7)
g = sns.FacetGrid(df, row="g", hue="g", aspect=15, height=.5, palette=pal)

# Draw the densities in a few steps
g.map(sns.kdeplot, "x",
      bw_adjust=.5, clip_on=False,
      fill=True, alpha=1, linewidth=1.5)
g.map(sns.kdeplot, "x", clip_on=False, color="w", lw=2, bw_adjust=.5)

# passing color=None to refline() uses the hue mapping
g.refline(y=0, linewidth=2, linestyle="-", color=None, clip_on=False)

# Define and use a simple function to label the plot in axes coordinates
def label(x, color, label):
    ax = plt.gca()
    ax.text(0, .2, label, fontweight="bold", color=color,
            ha="left", va="center", transform=ax.transAxes)

g.map(label, "x")

# Set the subplots to overlap
g.figure.subplots_adjust(hspace=-.25)

# Remove axes details that don't play well with overlap
g.set_titles("")
g.set(yticks=[], ylabel="")
g.despine(bottom=True, left=True)
38,229
How to visualize an evolution of a distribution in time?
Assuming you have an empirical distribution for each day - for example, a store looking at the total payment by each customer, per day - you can look upon this as a time series of histograms, and that could be plotted in various ways, maybe by a series of boxplots. If you have some example data we could try various options! A similar question was asked & answered here: https://stackoverflow.com/questions/11690194/time-series-histogram
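A series of boxplots is one line in base R once the data are in long format (a minimal sketch on simulated data; all names and values are made up):

set.seed(42)
day <- rep(1:12, each = 50)
value <- rnorm(length(day), mean = day / 4)  # the distribution drifts upward over time

boxplot(value ~ day, xlab = "day", ylab = "value")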
38,230
If I can make up priors, why can't I make up posteriors?
Well, in Bayesian statistics, you don't just "make up" your priors. You should be building a prior that best captures your knowledge before seeing the data. Otherwise, it is very hard to justify why anyone should care about the output of your Bayesian analysis. So while it's true that the practitioner has some sense of freedom in creating a prior, it should be tied to something meaningful in order for an analysis to be useful.

With that said, the prior isn't the only part of a Bayesian analysis that allows this freedom. A practitioner is offered the same freedom in constructing the likelihood function, which defines the relation between the data and the model. Just as using nonsense priors will lead to a nonsense posterior, using a nonsense likelihood will also lead to a nonsense posterior. So in practice, ideally one should choose a likelihood function that is flexible enough to handle one's uncertainty, yet constrained enough to make inference with limited data possible.

To demonstrate, consider two somewhat extreme examples. Suppose we are interested in determining the effect of a continuous-valued treatment on patients. In order to learn anything from the data, we must choose a model with the flexibility to capture that effect. If we were to simply leave out "treatment" from our set of regression parameters, then no matter what our outcome was, we would report "given the data, our model estimates no effect of treatment". On the other extreme, suppose we have a model so flexible that we don't constrain the treatment effect to have a finite number of discontinuities. Then (without strong priors, at least) we have almost no hope of any sort of convergence of our estimated treatment effect, no matter our sample size. Thus, our inference can be completely butchered by poor choices of likelihood functions, just as it can be by poor choices of priors.

Of course, in reality we wouldn't choose either of these extremes, but we still do make these types of choices. How flexible a treatment effect are we going to allow: linear, splines, interactions with other variables? There's always a tradeoff between "sufficiently flexible" and "estimable given our sample size". If we're smart, our likelihood functions should include reasonable constraints (e.g., a continuous treatment effect is probably relatively smooth, and probably doesn't include very high order interaction effects). This is essentially the same art as picking a prior: you want to constrain your inference with prior knowledge, and allow flexibility where there is uncertainty. The whole point of using data is to help constrain some of the flexibility that stems from our uncertainty.

In summary, a practitioner has freedom in the selection of both the prior and the likelihood function. In order for an analysis to be in any way meaningful, both choices should be a relatively good approximation of real phenomena.

EDIT: In the comments, @nanoman brings up an interesting take on the problem. One way to view it is that the likelihood function is a generic, non-subjective function, so that all possible models are included in the functional form of the likelihood before the prior is applied. Typically, though, the prior only puts positive probability on a finite set of functional forms of the likelihood. Thus, without the prior, inference is impossible, as the likelihood would be too flexible to ever allow any form of inference.

While this isn't the universally accepted definition of prior and likelihood function, this view does have a few advantages. For one, it is very natural in Bayesian model selection. In this case, rather than just putting priors on parameters of a single model, the prior puts probability over a set of competing models. But second, and I believe more to @nanoman's point, this view cleanly divides inference into subjective (prior) and non-subjective (likelihood function) parts. This is nice, because it clearly demonstrates that one cannot learn anything without some subjective constraints, as the likelihood would be too flexible. It also clearly demonstrates that once someone hands you a tractable likelihood function, some subjective information must have snuck in.
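The flexibility tradeoff is visible even in plain maximum likelihood fits; here is a minimal sketch on simulated data (all names and values are made up) contrasting a constrained and a wildly flexible model:

set.seed(7)
n <- 20
x <- runif(n)
y <- 1 + 2 * x + rnorm(n, sd = 0.3)  # the true effect is linear

fit.lin  <- lm(y ~ x)             # constrained: linear effect of x
fit.poly <- lm(y ~ poly(x, 15))   # very flexible: degree-15 polynomial

new <- data.frame(x = seq(0, 1, by = 0.01))
range(predict(fit.lin,  new))  # stays near the true line
range(predict(fit.poly, new))  # oscillates wildly between the 20 points

With only 20 observations, the flexible fit chases noise; the constraint built into the simpler model is doing the same job a prior would.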
38,231
If I can make up priors, why can't I make up posteriors?
If you have a belief about the distribution of your data after seeing data, then why would you be estimating its parameters with data? You already have the parameters.
38,232
If I can make up priors, why can't I make up posteriors?
In the case of many problems in statistics, you have some data, let's denote it as $X$, and want to learn about some "parameter" $\theta$ of the distribution of the data, i.e. calculate $\theta|X$ kinds of things (conditional distribution, conditional expectation, etc.). There are several ways this can be achieved, including maximum likelihood, and without getting into a discussion of whether and which of them is better, you can consider using Bayes theorem as one of them. One of the advantages of using Bayes theorem is that it gives you the posterior directly: given that you know the conditional distribution of the data given the parameter (the likelihood) and the distribution of the parameter (the prior), you simply calculate

$$ \overbrace{p(\theta|X)}^\text{posterior} = \frac{\overbrace{p(X|\theta)}^\text{likelihood}\;\overbrace{p(\theta)}^\text{prior}}{p(X)} $$

The likelihood is the conditional distribution of your data, so it is a matter of understanding your data and choosing some distribution that approximates it best, and it is a rather uncontroversial concept. As for the prior, notice that for the above formula to work you need some prior. In a perfect world, you would know a priori the distribution of $\theta$ and apply it to get the posterior. In the real world, the prior is something that you assume, given your best knowledge, and plug in to Bayes theorem. You could choose an "uninformative" prior $p(\theta) \propto 1$, but there are many arguments that such priors are neither "uninformative" nor reasonable. What I'm trying to say is that there are many ways you could come up with some distribution for a prior. Some consider priors a blessing, since they make it possible to bring your out-of-data knowledge into the model, while others, for exactly the same reason, consider them problematic.

Answering your question: sure, you can assume that the distribution of the parameter given the data is something. On a day-to-day basis we make decisions all the time based on assumptions that are not always rigorously validated. However, the difference between the prior and the posterior is that the posterior is something that you learned from the data (and the prior). If it isn't - if it is just your wild guess - then it's not a posterior any more.

As for why we allow ourselves to "make up" priors, there are two answers depending on who you ask: either (a) for the machinery to work we need some prior, or (b) we know something in advance and want to include it in our model, and thanks to priors this is possible. In either case, we usually expect the data, rather than the priors, to have the "final word".
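For a concrete instance of the formula, here is a minimal conjugate sketch with a Beta prior and a binomial likelihood (the prior parameters and data are invented for illustration):

a <- 2; b <- 2   # prior: theta ~ Beta(2, 2)
k <- 7; n <- 10  # data: 7 successes in 10 trials

# Conjugacy makes the posterior available in closed form:
# theta | X ~ Beta(a + k, b + n - k) = Beta(9, 5)
curve(dbeta(x, a, b), from = 0, to = 1, lty = 2,
      xlab = "theta", ylab = "density")
curve(dbeta(x, a + k, b + n - k), add = TRUE)
legend("topleft", legend = c("prior", "posterior"), lty = c(2, 1))

The posterior is pulled away from the prior toward the data - the data getting the "final word" as the sample size grows.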
38,233
If I can make up priors, why can't I make up posteriors?
Philosophically, there is nothing wrong with "eliciting a posterior." It's a bit more difficult to do in a coherent manner than with priors (because you need to respect the likelihood), but IMO you are asking a really good question. To turn this into something practical, "making up" a posterior is a potentially useful way to elicit a prior. That is, I take all data realizations $X = x$ and ask myself what the posterior $\pi(\theta \mid x)$ would be. If I do this in a fashion that is consistent with the likelihood, then I will have equivalently specified $\pi(\theta)$. This is sometimes called "downdating." Once you realize this, you will see that "making up the prior" and "making up the posterior" are basically the same thing. As I said, it is tricky to do this in a manner which is consistent with the likelihood, but even if you do it for just a few values of $x$ it can be very illuminating about what a good prior will look like.
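In a conjugate setting, "downdating" can be written out explicitly. Here is a minimal sketch for a binomial likelihood within the Beta family (the elicited posterior values below are invented):

# Elicited: after seeing k = 3 successes in n = 10 trials, I would
# believe theta | x ~ Beta(5, 9).
alpha <- 5; beta <- 9
k <- 3; n <- 10

# Under a Beta(a, b) prior, the posterior is Beta(a + k, b + n - k),
# so the implied ("downdated") prior is obtained by subtracting the data:
a <- alpha - k        # 2
b <- beta - (n - k)   # 2
c(a = a, b = b)       # a coherent elicitation requires both to be positive

If the subtraction yields non-positive values, the stated posterior is not consistent with any Beta prior for that data - which is exactly the kind of check that makes this exercise illuminating.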
38,234
Completely different results from lme() and lmer()
As was noted in this answer, and also mentioned in one of the comments, the problem seems to be a local maximum. To see this more clearly, I have written below some simple code to calculate the negative log-likelihood of this model and do the optimization using optim(). Starting with different initial values leads to the two different solutions: # data multi <- structure(list(x = c(4.9, 4.84, 4.91, 5, 4.95, 3.94, 3.88, 3.95, 4.04, 3.99, 2.97, 2.92, 2.99, 3.08, 3.03, 2.01, 1.96, 2.03, 2.12, 2.07, 1.05, 1, 1.07, 1.16, 1.11), y = c(3.2, 3.21, 3.256, 3.25, 3.256, 3.386, 3.396, 3.442, 3.436, 3.442, 3.572, 3.582, 3.628, 3.622, 3.628, 3.758, 3.768, 3.814, 3.808, 3.814, 3.944, 3.954, 4, 3.994, 4), pid = 1:25, gid = c(1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 5L)), class = "data.frame", row.names = c(NA, -25L)) # function to calculate the negative log-likelihood of the random intercepts model library("mvtnorm") logLik <- function (thetas, y, X, id) { ncX <- ncol(X) betas <- thetas[seq_len(ncX)] sigma_b <- exp(thetas[ncX + 1]) sigma <- exp(thetas[ncX + 2]) eta <- c(X %*% betas) unq_id <- unique(id) n <- length(unq_id) lL <- numeric(n) for (i in seq_len(n)) { id_i <- id == unq_id[i] n_i <- sum(id_i) V_i <- matrix(sigma_b^2, n_i, n_i) diag(V_i) <- diag(V_i) + sigma^2 lL[i] <- dmvnorm(y[id_i], mean = eta[id_i], sigma = V_i, log = TRUE) } - sum(lL, na.rm = TRUE) } # optimization using as initial values 0 for the fixed effects, # and 1 for the variance components opt <- optim(rep(0, 4), logLik, method = "BFGS", y = multi$y, X = cbind(1, multi$x), id = multi$gid) opt$par[1:2] # fixed effects #> [1] 2.855872 0.250341 exp(opt$par[3]) # sd random intercepts #> [1] 0.6029724 exp(opt$par[4]) # sd error terms #> [1] 0.01997889 # optimization using as initial values 4 & -0.2 for the fixed effects, # and 0.0003 and 0.034 for the variance components opt2 <- optim(c(4, -0.2, -8, -3.4), logLik, method = "BFGS", y = multi$y, X = cbind(1, multi$x), id = multi$gid) opt2$par[1:2] # fixed effects #> [1] 4.1846965 -0.1928397 exp(opt2$par[3]) # sd random intercepts #> [1] 0.000270746 exp(opt2$par[4]) # sd error terms #> [1] 0.03239167
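As a quick follow-up (my addition, not part of the answer): since logLik above returns the negative log-likelihood, comparing the attained objective values tells you which of the two local solutions is the better one; the smaller value wins:
opt$value   # negative log-likelihood at the first solution
opt2$value  # negative log-likelihood at the second solution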
Completely different results from lme() and lmer()
As was noted in this answer, and also mentioned in one of the comments, the problem seems to be a local maximum. To see this more clearly, I have written below some simple code to calculate the negativ
Completely different results from lme() and lmer() As was noted in this answer, and also mentioned in one of the comments, the problem seems to be a local maximum. To see this more clearly, I have written below some simple code to calculate the negative log-likelihood of this model and do the optimization using optim(). Starting with different initial values leads to the two different solutions: # data multi <- structure(list(x = c(4.9, 4.84, 4.91, 5, 4.95, 3.94, 3.88, 3.95, 4.04, 3.99, 2.97, 2.92, 2.99, 3.08, 3.03, 2.01, 1.96, 2.03, 2.12, 2.07, 1.05, 1, 1.07, 1.16, 1.11), y = c(3.2, 3.21, 3.256, 3.25, 3.256, 3.386, 3.396, 3.442, 3.436, 3.442, 3.572, 3.582, 3.628, 3.622, 3.628, 3.758, 3.768, 3.814, 3.808, 3.814, 3.944, 3.954, 4, 3.994, 4), pid = 1:25, gid = c(1L, 1L, 1L, 1L, 1L, 2L, 2L, 2L, 2L, 2L, 3L, 3L, 3L, 3L, 3L, 4L, 4L, 4L, 4L, 4L, 5L, 5L, 5L, 5L, 5L)), class = "data.frame", row.names = c(NA, -25L)) # function to calculate the negative log-likelihood of the random intercepts model library("mvtnorm") logLik <- function (thetas, y, X, id) { ncX <- ncol(X) betas <- thetas[seq_len(ncX)] sigma_b <- exp(thetas[ncX + 1]) sigma <- exp(thetas[ncX + 2]) eta <- c(X %*% betas) unq_id <- unique(id) n <- length(unq_id) lL <- numeric(n) for (i in seq_len(n)) { id_i <- id == unq_id[i] n_i <- sum(id_i) V_i <- matrix(sigma_b^2, n_i, n_i) diag(V_i) <- diag(V_i) + sigma^2 lL[i] <- dmvnorm(y[id_i], mean = eta[id_i], sigma = V_i, log = TRUE) } - sum(lL, na.rm = TRUE) } # optimization using as initial values 0 for the fixed effects, # and 1 for the variance components opt <- optim(rep(0, 4), logLik, method = "BFGS", y = multi$y, X = cbind(1, multi$x), id = multi$gid) opt$par[1:2] # fixed effects #> [1] 2.855872 0.250341 exp(opt$par[3]) # sd random intercepts #> [1] 0.6029724 exp(opt$par[4]) # sd error terms #> [1] 0.01997889 # optimization using as initial values 4 & -0.2 for the fixed effects, # and 0.0003 and 0.034 for the variance components opt2 <- optim(c(4, -0.2, -8, -3.4), logLik, method = "BFGS", y = multi$y, X = cbind(1, multi$x), id = multi$gid) opt2$par[1:2] # fixed effects #> [1] 4.1846965 -0.1928397 exp(opt2$par[3]) # sd random intercepts #> [1] 0.000270746 exp(opt2$par[4]) # sd error terms #> [1] 0.03239167
Completely different results from lme() and lmer() As was noted in this answer, and also mentioned in one of the comments, the problem seems to be a local maximum. To see this more clearly, I have written below some simple code to calculate the negativ
38,235
Completely different results from lme() and lmer()
I agree with @DimitrisRizopoulos's answer, and have a few more points to make. I will start by saying that I am unhappy that lmer doesn't find the best answer - even though I suspect this situation is probably limited to small, unusual (see below) data sets. One of the reasons that lme may do better is that it fits on the log-standard-deviation scale, which may make the minimum near zero "broader". You can get lmer to replicate the lme results by setting an explicit, lower starting value for the scaled standard deviation (start=...); based on the explorations below, start=8 or any lower value should work OK. For what it's worth, this will lead to an estimated random effects variance of 0 (and a "singular fit" message, and an answer that's equivalent to leaving out the random effects component entirely and using lm() ...) In this particular case using the "nloptwrap" optimizer doesn't help; in fact all of the optimizers that lmer can use, starting from the default starting values ($\theta$ (scaled standard deviation) = 1.0), find the higher local minimum away from zero. Here is code equivalent to the approach lmer uses to find the starting value by default, when only intercept-valued random effects are present (see here): v0 <- with(multi,var(ave(y,gid))) ## variance among group values v.e <- var(multi$y)-v0 ## residual var ~ total var - group variance sqrt(v0/v.e) ## convert to scaled standard deviation This leads to a starting value of $\theta=10.8$. We can see systematically how different starting values give different results: m0 <- lmer(y~x+(1|(gid)), data=multi, REML=TRUE) tvec2 <- seq(0,20,length=51) ff <- function(t0) getME(update(m0,start=t0),"theta") v <- sapply(tvec2,ff) plot(tvec2,v) abline(v=10.8,col="red") We can also explicitly visualize the (negative log-)likelihood surface: ## helper function to capture fitting trajectory cfun <- function(...) { cc <- capture.output(x <- do.call(lmer,c(list(...),list(verbose=100)))) gfun <- function(x,s) { as.numeric(gsub(s,"",grep(s,x,value=TRUE))) } it <- gfun(cc,"iteration: +") xval <- gfun(cc,"\tx = ") fval <- gfun(cc,"\tf\\(x\\) = +") attr(x,"optvals") <- data.frame(it,xval,fval) return(x) } c0 <- cfun(y~x+(1|(gid)), data=multi, REML=TRUE) c1 <- cfun(y~x+(1|(gid)), data=multi, REML=FALSE) f <- as.function(m0) tvec <- seq(0,100,length=101) dvec <- sapply(tvec,f) m3 <- update(m0,REML=FALSE) f2 <- as.function(m3) dvec2 <- sapply(tvec,f2) par(las=1,bty="l") matplot(tvec,cbind(dvec,dvec2),type="l", ylab="deviance/REMLcrit", xlab="scaled standard dev") with(attr(c0,"optvals"),text(xval,fval,it)) with(attr(c1,"optvals"),text(xval,fval,it,col=2)) legend("bottomright",c("REML","ML"), col=1:2,lty=1:2) The numbers show the sequence of values tried. We can see that it is only the slightly different shape of the ML curve that tips the optimizer toward the boundary fit rather than the interior fit. Are these data artificial? The left plot below shows the data by group; the right plot shows the values with their group means subtracted. There is almost no variation among the 5 values within each group ... If we simulate data with the same properties (starting from the estimated coefficients), but where the variation is actually Gaussian, we don't get the same kind of multimodal surface at all: multi_sim <- transform(multi,y=simulate(m0,seed=101)[[1]]) f3 <- as.function(update(m0,data=multi_sim)) dvec3 <- sapply(tvec,f3) plot(tvec,dvec3,type="l")
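For convenience, here is a minimal sketch of the refit mentioned above (my addition; the value 8 comes from the answer's own exploration and any lower value should behave the same):
m_low <- update(m0, start = 8)  # restart the optimizer from a low scaled standard deviation
getME(m_low, "theta")           # expected to be ~0, i.e. the boundary ("singular") fit matching lme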
Completely different results from lme() and lmer()
I agree with @DimitrisRizopoulos's answer, and have a few more points to make. I will start by saying that I am unhappy that lmer doesn't find the best answer - even though I suspect this situation i
Completely different results from lme() and lmer() I agree with @DimitrisRizopoulos's answer, and have a few more points to make. I will start by saying that I am unhappy that lmer doesn't find the best answer - even though I suspect this situation is probably limited to small, unusual (see below) data sets. One of the reasons that lme may do better is that it fits on the log-standard-deviation scale, which may make the minimum near zero "broader". You can get lmer to replicate the lme results by setting an explicit, lower starting value for the scaled standard deviation (start=...); based on the explorations below, start=8 or any lower value should work OK. For what it's worth, this will lead to an estimated random effects variance of 0 (and a "singular fit" message, and an answer that's equivalent to leaving out the random effects component entirely and using lm() ...) In this particular case using the "nloptwrap" optimizer doesn't help; in fact all of the optimizers that lmer can use, starting from the default starting values ($\theta$ (scaled standard deviation) = 1.0), find the higher local minimum away from zero. Here is code equivalent to the approach lmer uses to find the starting value by default, when only intercept-valued random effects are present (see here): v0 <- with(multi,var(ave(y,gid))) ## variance among group values v.e <- var(multi$y)-v0 ## residual var ~ total var - group variance sqrt(v0/v.e) ## convert to scaled standard deviation This leads to a starting value of $\theta=10.8$. We can see systematically how different starting values give different results: m0 <- lmer(y~x+(1|(gid)), data=multi, REML=TRUE) tvec2 <- seq(0,20,length=51) ff <- function(t0) getME(update(m0,start=t0),"theta") v <- sapply(tvec2,ff) plot(tvec2,v) abline(v=10.8,col="red") We can also explicitly visualize the (negative log-)likelihood surface: ## helper function to capture fitting trajectory cfun <- function(...) { cc <- capture.output(x <- do.call(lmer,c(list(...),list(verbose=100)))) gfun <- function(x,s) { as.numeric(gsub(s,"",grep(s,x,value=TRUE))) } it <- gfun(cc,"iteration: +") xval <- gfun(cc,"\tx = ") fval <- gfun(cc,"\tf\\(x\\) = +") attr(x,"optvals") <- data.frame(it,xval,fval) return(x) } c0 <- cfun(y~x+(1|(gid)), data=multi, REML=TRUE) c1 <- cfun(y~x+(1|(gid)), data=multi, REML=FALSE) f <- as.function(m0) tvec <- seq(0,100,length=101) dvec <- sapply(tvec,f) m3 <- update(m0,REML=FALSE) f2 <- as.function(m3) dvec2 <- sapply(tvec,f2) par(las=1,bty="l") matplot(tvec,cbind(dvec,dvec2),type="l", ylab="deviance/REMLcrit", xlab="scaled standard dev") with(attr(c0,"optvals"),text(xval,fval,it)) with(attr(c1,"optvals"),text(xval,fval,it,col=2)) legend("bottomright",c("REML","ML"), col=1:2,lty=1:2) The numbers show the sequence of values tried. We can see that it is only the slightly different shape of the ML curve that tips the optimizer toward the boundary fit rather than the interior fit. Are these data artificial? The left plot below shows the data by group; the right plot shows the values with their group means subtracted. There is almost no variation among the 5 values within each group ... If we simulate data with the same properties (starting from the estimated coefficients), but where the variation is actually Gaussian, we don't get the same kind of multimodal surface at all: multi_sim <- transform(multi,y=simulate(m0,seed=101)[[1]]) f3 <- as.function(update(m0,data=multi_sim)) dvec3 <- sapply(tvec,f3) plot(tvec,dvec3,type="l")
Completely different results from lme() and lmer() I agree with @DimitrisRizopoulos's answer, and have a few more points to make. I will start by saying that I am unhappy that lmer doesn't find the best answer - even though I suspect this situation i
38,236
How can I estimate the highest posterior density interval from a set of x,y values describing the PDF?
Since you have some example points from the PDF rather than a closed-form representation, and you're looking for a highest-density interval rather than a highest-density set, it's easiest to do this by brute force. Consider all $a$ and $b$ among the $x$-coordinates you have, check that $[a, b]$ has at least the desired coverage, and return the interval of greatest density. Here's a simple-minded implementation that (a) is slow for large numbers of points and (b) treats the density of each $x_n$ as if it was the density between $x_{n-1}$ and $x_n$. hdi = function(x, x.density, coverage) {best = 0 for (ai in 1 : (length(x) - 1)) {for (bi in (ai + 1) : length(x)) {mass = sum(diff(x[ai : bi]) * x.density[(ai + 1) : bi]) if (mass >= coverage && mass / (x[bi] - x[ai]) > best) {best = mass / (x[bi] - x[ai]) ai.best = ai bi.best = bi}}} c(x[ai.best], x[bi.best])} An example: library(ggplot2) x = seq(0, 1, len = 1000) x.density = dbeta(x, shape1 = 10, shape2 = 2) interval = hdi(x, x.density, .8) qplot(x, x.density) + geom_vline(aes(xintercept = interval))
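If speed matters, here is a possible variant (my sketch, not part of the answer): precompute the cumulative mass once, then for each left endpoint advance a single right pointer, which is roughly O(n) overall. Note it uses the "shortest interval with at least the requested coverage" criterion, which can differ slightly from the maximum-density criterion above when the coverage is strictly exceeded.
hdi_fast = function(x, x.density, coverage) {
  cmass = c(0, cumsum(diff(x) * x.density[-1]))  # mass accumulated up to each grid point, same mass formula as above
  a.best = 1; b.best = length(x); width.best = Inf
  b = 2
  for (a in 1 : (length(x) - 1)) {
    if (b <= a) b = a + 1
    # the minimal qualifying right endpoint never moves left as a increases
    while (b < length(x) && cmass[b] - cmass[a] < coverage) b = b + 1
    if (cmass[b] - cmass[a] >= coverage && x[b] - x[a] < width.best) {
      width.best = x[b] - x[a]; a.best = a; b.best = b
    }
  }
  c(x[a.best], x[b.best])
}
interval2 = hdi_fast(x, x.density, .8)  # should closely match the brute-force result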
How can I estimate the highest posterior density interval from a set of x,y values describing the PD
Since you have some example points from the PDF rather than a closed-form representation, and you're looking for a highest-density interval rather than a highest-density set, it's easiest to do this b
How can I estimate the highest posterior density interval from a set of x,y values describing the PDF? Since you have some example points from the PDF rather than a closed-form representation, and you're looking for a highest-density interval rather than a highest-density set, it's easiest to do this by brute force. Consider all $a$ and $b$ among the $x$-coordinates you have, check that $[a, b]$ has at least the desired coverage, and return the interval of greatest density. Here's a simple-minded implementation that (a) is slow for large numbers of points and (b) treats the density of each $x_n$ as if it was the density between $x_{n-1}$ and $x_n$. hdi = function(x, x.density, coverage) {best = 0 for (ai in 1 : (length(x) - 1)) {for (bi in (ai + 1) : length(x)) {mass = sum(diff(x[ai : bi]) * x.density[(ai + 1) : bi]) if (mass >= coverage && mass / (x[bi] - x[ai]) > best) {best = mass / (x[bi] - x[ai]) ai.best = ai bi.best = bi}}} c(x[ai.best], x[bi.best])} An example: library(ggplot2) x = seq(0, 1, len = 1000) x.density = dbeta(x, shape1 = 10, shape2 = 2) interval = hdi(x, x.density, .8) qplot(x, x.density) + geom_vline(aes(xintercept = interval))
How can I estimate the highest posterior density interval from a set of x,y values describing the PD Since you have some example points from the PDF rather than a closed-form representation, and you're looking for a highest-density interval rather than a highest-density set, it's easiest to do this b
38,237
How can I estimate the highest posterior density interval from a set of x,y values describing the PDF?
Depending on your needs, there are efficient solutions available. The purpose of this answer is to show how you can formulate the problem, encapsulate the key ideas, and implement them using capabilities offered by your programming environment. Analysis Given any density function $f$, all highest posterior density sets are of the form $\{x\mid f(x) \ge h\}$ or $\{x\mid f(x) \gt h\}.$ For floating-point computing purposes we needn't distinguish between them, so let's just call either one $f^{[h]}.$ Writing the indicator function as $\mathcal I,$ the total probability of any such set is $$p_f(h) = \Pr\left(f^{[h]}\right) = \int_{\mathbb{R}} \mathcal{I}(f(x) \gt h) f(x)\mathrm{d}x.$$ Obtaining the $100\alpha\%$ highest probability density set therefore is a matter of solving $$0 = p_f(h) - \alpha.$$ This reduces the problem to one of root finding, which has well-established solutions and is widely implemented using accurate, efficient methods. Implementation Here is a naive R solution. ("Naive" means it contains no defenses against difficult-to-evaluate pdfs or bad inputs, such as when x.min and x.max do not include the entire support of the distribution.) It employs the function ifelse to implement $\mathcal I,$ integrate to integrate the pdf, and uniroot to find a root. Its arguments are the amount of probability alpha, a density function df, and the support of df (smallest and largest possible values). highest_alpha <- function(alpha, df, x.min, x.max, ...) { p <- function(h) { g <- function(x) {y <- df(x); ifelse(y > h, y, 0)} integrate(g, x.min, x.max, ...)$value - alpha } uniroot(p, c(x.min, x.max), tol=1e-12)$root } The question supposes the pdf is given in a somewhat awkward form as a sequence of vertices along its graph: its "spaghetti representation." No problem: just interpolate. Although in many cases an interpolation of the log pdf would be best, some care is needed to handle areas where the pdf might equal zero, so in this example I just use linear interpolation as implemented by approxfun. I do take measures to ensure the pdf is normalized, though, by first integrating it to find a normalizing constant. as.pdf <- function(x, y, ...) { f <- approxfun(x, y, method="linear", yleft=0, yright=0, rule=2) const <- integrate(f, min(x), max(x), ...)$value approxfun(x, y/const, method="linear", yleft=0, yright=0, rule=2) } Its arguments are the x and y coordinates, sorted by x, of the spaghetti representation of the pdf. These six lines of code will perform asymptotically well, typically requiring $O(n\log(n))$ calculations when the pdf is given by $n$ vertices of its graph. Examples As a test, highest_alpha was applied to a standard Normal pdf as shown in the first figure. The total probability was divided into the highest 1/6, highest 2/6, ... highest 6/6 and the corresponding areas have been colored accordingly. By symmetry, the boundaries must lie on the set of points $\Phi^{-1}(1/12), \Phi^{-1}(2/12), \ldots, \Phi^{-1}(11/12)$ where $\Phi$ is the standard normal CDF, so I have plotted those points as vertical black line segments: The black dots, of course, show the spaghetti representation of the pdf that was used. The horizontal lines are the values of $h$ found by highest_alpha. Let's see a solution for a more complex pdf. This one is a mixture of three Normal distributions. A close look shows that indeed the interpolation is working: many of the region boundaries fall between the spaghetti points. For completeness, here is the code used for the first example. 
The first four lines create the spaghetti representation while the last two do the calculations. i <- c(exp(-(1:10)), 1 - exp(-(1:10)), seq(0, 1, length.out=101)) x <- sort(qnorm(i)) x <- x[!is.infinite(x)] y <- dnorm(x) k <- 6 h <- sapply(1:(k-1) / k, function(h) highest_alpha(h, as.pdf(x, y), min(x), max(x))) The figure is made with ggplot2: library(ggplot2) n <- 501 X <- data.frame(x=seq(min(x), max(x), length.out=n)) X$y <- as.pdf(x, y)(X$x) X$Interval <- factor(rowSums(outer(X$y, h, ">"))) dx <- diff(range(x)) / n ggplot(X, aes(x, y)) + geom_hline(aes(yintercept=h, color=Density), size=1, show.legend=FALSE, data=data.frame(h=h, Density=factor(h, labels=signif(h, 2)))) + geom_vline(xintercept=qnorm(1:(2*k-1)/(2*k))) + geom_path(color="gray") + geom_col(aes(fill=Interval, color=NULL), alpha=0.5, width=dx) + geom_point(data=data.frame(x=x, y=y)) + scale_fill_manual(values=terrain.colors(k)) + scale_color_manual(values=terrain.colors(k)) + theme(panel.grid=element_blank()) I dodged one issue by computing explicitly only the heights corresponding to the highest-density regions. To find those regions, one has to determine where the pdf crosses those heights. Because that's just another root-finding exercise, I won't go into the (redundant) details. If you really must find an interval (a connected set), then you can formulate the problem as one of minimizing the interval length $\delta$ subject to the highest-probability constraint. That is, given $0 \lt \alpha \lt 1,$ the problem is to find an ordered pair $(x, \delta)$ where $\delta$ is as small as possible subject to the constraints $\delta \gt 0.$ $x$ and $x+\delta$ are in the domain of the pdf $f.$ $F_\alpha(x,\delta) \ge 0$ where $$F_\alpha(x,\delta) = \int_x^{x+\delta} f(t)\,\mathrm{d}t - \alpha.$$ This problem is amenable to the same approach: (1) use the capabilities of your programming environment to compute the integral accurately and efficiently and (2) employ an efficient two-dimensional constrained optimization routine. Most of the constraints are linear and the single non-linear constraint is differentiable, implying this approach has a good chance of succeeding and yielding accurate solutions.
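As a hypothetical follow-up to the root-finding remark (my sketch, not part of the answer): for a unimodal pdf like the first example, once the height h is known, the HPD interval endpoints are simply where the pdf crosses h, which uniroot() finds directly:
fhat <- as.pdf(x, y)                            # interpolated pdf from the first example
h50 <- highest_alpha(0.5, fhat, min(x), max(x)) # height for the 50% HPD set
mode.x <- x[which.max(y)]                       # the mode separates the two crossings
lower <- uniroot(function(z) fhat(z) - h50, c(min(x), mode.x))$root
upper <- uniroot(function(z) fhat(z) - h50, c(mode.x, max(x)))$root
c(lower, upper)                                 # should be close to qnorm(c(0.25, 0.75))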
How can I estimate the highest posterior density interval from a set of x,y values describing the PD
Depending on your needs, there are efficient solutions available. The purpose of this answer is to show how you can formulate the problem, encapsulate the key ideas, and implement them using capabili
How can I estimate the highest posterior density interval from a set of x,y values describing the PDF? Depending on your needs, there are efficient solutions available. The purpose of this answer is to show how you can formulate the problem, encapsulate the key ideas, and implement them using capabilities offered by your programming environment. Analysis Given any density function $f$, all highest posterior density sets are of the form $\{x\mid f(x) \ge h\}$ or $\{x\mid f(x) \gt h\}.$ For floating-point computing purposes we needn't distinguish between them, so let's just call either one $f^{[h]}.$ Writing the indicator function as $\mathcal I,$ the total probability of any such set is $$p_f(h) = \Pr\left(f^{[h]}\right) = \int_{\mathbb{R}} \mathcal{I}(f(x) \gt h) f(x)\mathrm{d}x.$$ Obtaining the $100\alpha\%$ highest probability density set therefore is a matter of solving $$0 = p_f(h) - \alpha.$$ This reduces the problem to one of root finding, which has well-established solutions and is widely implemented using accurate, efficient methods. Implementation Here is a naive R solution. ("Naive" means it contains no defenses against difficult-to-evaluate pdfs or bad inputs, such as when x.min and x.max do not include the entire support of the distribution.) It employs the function ifelse to implement $\mathcal I,$ integrate to integrate the pdf, and uniroot to find a root. Its arguments are the amount of probability alpha, a density function df, and the support of df (smallest and largest possible values). highest_alpha <- function(alpha, df, x.min, x.max, ...) { p <- function(h) { g <- function(x) {y <- df(x); ifelse(y > h, y, 0)} integrate(g, x.min, x.max, ...)$value - alpha } uniroot(p, c(x.min, x.max), tol=1e-12)$root } The question supposes the pdf is given in a somewhat awkward form as a sequence of vertices along its graph: its "spaghetti representation." No problem: just interpolate. Although in many cases an interpolation of the log pdf would be best, some care is needed to handle areas where the pdf might equal zero, so in this example I just use linear interpolation as implemented by approxfun. I do take measures to ensure the pdf is normalized, though, by first integrating it to find a normalizing constant. as.pdf <- function(x, y, ...) { f <- approxfun(x, y, method="linear", yleft=0, yright=0, rule=2) const <- integrate(f, min(x), max(x), ...)$value approxfun(x, y/const, method="linear", yleft=0, yright=0, rule=2) } Its arguments are the x and y coordinates, sorted by x, of the spaghetti representation of the pdf. These six lines of code will perform asymptotically well, typically requiring $O(n\log(n))$ calculations when the pdf is given by $n$ vertices of its graph. Examples As a test, highest_alpha was applied to a standard Normal pdf as shown in the first figure. The total probability was divided into the highest 1/6, highest 2/6, ... highest 6/6 and the corresponding areas have been colored accordingly. By symmetry, the boundaries must lie on the set of points $\Phi^{-1}(1/12), \Phi^{-1}(2/12), \ldots, \Phi^{-1}(11/12)$ where $\Phi$ is the standard normal CDF, so I have plotted those points as vertical black line segments: The black dots, of course, show the spaghetti representation of the pdf that was used. The horizontal lines are the values of $h$ found by highest_alpha. Let's see a solution for a more complex pdf. This one is a mixture of three Normal distributions. 
A close look shows that indeed the interpolation is working: many of the region boundaries fall between the spaghetti points. For completeness, here is the code used for the first example. The first four lines create the spaghetti representation while the last two do the calculations. i <- c(exp(-(1:10)), 1 - exp(-(1:10)), seq(0, 1, length.out=101)) x <- sort(qnorm(i)) x <- x[!is.infinite(x)] y <- dnorm(x) k <- 6 h <- sapply(1:(k-1) / k, function(h) highest_alpha(h, as.pdf(x, y), min(x), max(x))) The figure is made with ggplot2: library(ggplot2) n <- 501 X <- data.frame(x=seq(min(x), max(x), length.out=n)) X$y <- as.pdf(x, y)(X$x) X$Interval <- factor(rowSums(outer(X$y, h, ">"))) dx <- diff(range(x)) / n ggplot(X, aes(x, y)) + geom_hline(aes(yintercept=h, color=Density), size=1, show.legend=FALSE, data=data.frame(h=h, Density=factor(h, labels=signif(h, 2)))) + geom_vline(xintercept=qnorm(1:(2*k-1)/(2*k))) + geom_path(color="gray") + geom_col(aes(fill=Interval, color=NULL), alpha=0.5, width=dx) + geom_point(data=data.frame(x=x, y=y)) + scale_fill_manual(values=terrain.colors(k)) + scale_color_manual(values=terrain.colors(k)) + theme(panel.grid=element_blank()) I dodged one issue by computing explicitly only the heights corresponding to the highest-density regions. To find those regions, one has to determine where the pdf crosses those heights. Because that's just another root-finding exercise, I won't go into the (redundant) details. If you really must find an interval (a connected set), then you can formulate the problem as one of minimizing the interval length $\delta$ subject to the highest-probability constraint. That is, given $0 \lt \alpha \lt 1,$ the problem is to find an ordered pair $(x, \delta)$ where $\delta$ is as small as possible subject to the constraints $\delta \gt 0.$ $x$ and $x+\delta$ are in the domain of the pdf $f.$ $F_\alpha(x,\delta) \ge 0$ where $$F_\alpha(x,\delta) = \int_x^{x+\delta} f(t)\,\mathrm{d}t - \alpha.$$ This problem is amenable to the same approach: (1) use the capabilities of your programming environment to compute the integral accurately and efficiently and (2) employ an efficient two-dimensional constrained optimization routine. Most of the constraints are linear and the single non-linear constraint is differentiable, implying this approach has a good chance of succeeding and yielding accurate solutions.
How can I estimate the highest posterior density interval from a set of x,y values describing the PD Depending on your needs, there are efficient solutions available. The purpose of this answer is to show how you can formulate the problem, encapsulate the key ideas, and implement them using capabili
38,238
How to simulate Likert-scale data in R? [closed]
To perform the simulation, here is a one line solution using the sample function: sample(0:4, N, replace = TRUE, prob = c(0.1, 0.2, 0.4, 0.2, 0.1)) #where: # 0:4 is the sequence of values (0 to 4 in this case) # N is the number of samples (participants) # replace = TRUE for sampling with replacement # prob = c(0.1, 0.2, 0.4, 0.2, 0.1) is the probability of selection for each score.
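A quick empirical check of this one-liner (my addition; the seed and sample size are chosen just for the demonstration):
set.seed(42)
N <- 1e4
draws <- sample(0:4, N, replace = TRUE, prob = c(0.1, 0.2, 0.4, 0.2, 0.1))
table(draws) / N  # empirical proportions should be close to c(0.1, 0.2, 0.4, 0.2, 0.1)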
How to simulate Likert-scale data in R? [closed]
To perform the simulation, here is a one line solution using the sample function: sample(0:4, N, replace = TRUE, prob = c(0.1, 0.2, 0.4, 0.2, 0.1)) #where: # 0:4 is the sequence of values (0 to 4 i
How to simulate Likert-scale data in R? [closed] To perform the simulation, here is a one line solution using the sample function: sample(0:4, N, replace = TRUE, prob = c(0.1, 0.2, 0.4, 0.2, 0.1)) #where: # 0:4 is the sequence of values (0 to 4 in this case) # N is the number of samples (participants) # replace = TRUE for sampling with replacement # prob = c(0.1, 0.2, 0.4, 0.2, 0.1) is the probability of selection for each score.
How to simulate Likert-scale data in R? [closed] To perform the simulation, here is a one line solution using the sample function: sample(0:4, N, replace = TRUE, prob = c(0.1, 0.2, 0.4, 0.2, 0.1)) #where: # 0:4 is the sequence of values (0 to 4 i
38,239
How to simulate Likert-scale data in R? [closed]
A likert scale, as the term is typically used, is just an ordinal rating scale. The phrase is often used for a single rating, which might have been called a likert item. Traditionally, the idea was that you would have a set of likert items that all measure the same thing and have the same measurement properties. The result is that you could sum (or average) the items and end up with a good measure of something that approximated a continuous, interval scale (see: levels of measurement). On the other hand, to simulate data, you need to know the distribution that the data should have. More generally, for simulation studies people generally want to have a data generating process for the resulting distribution. A likert scale is a type of data gathering instrument, not a distribution and not a data generating process. Thus, what you ultimately need is to specify a data generating process that you believe is appropriate for the eventual likert data that you want to simulate. After that, there are just the trivial implementation details specific to the software you intend to use (in your case, R). Because people conceptualize likert data as manifest data derived from a latent variable, the most common approach would be to simulate the latent variable according to the theorized distribution (perhaps a normal distribution), and then have a function that maps it to a small ordered set of numbers (e.g., $1, \ldots, 5$). Note that moving from the latent to the manifest variable makes many of the parameters of the latent variable's distribution unidentifiable, so you often needn't bother worrying about them. A simple approach would be to have just the two steps move directly to the final rating, but a more comprehensive approach could model each item with their own set of the two steps, and then have the likert scale combined from the items just as they would be in a real case. Here is an example, coded in R. I will imagine that there are 5 items that measure the same construct. As such, they are moderately correlated. Two items might be 'reverse scored', but I will assume that doesn't affect the result appreciably so I won't simulate that. However, I will make some more strongly related to the underlying variable than others, and I will make some biased towards higher or lower ratings. set.seed(8649) # this makes the example exactly reproducible N = 10 # this is how much data I'll generate latent = rnorm(N) # this is the actual latent variable I want to be measuring ##### generate latent responses to items item1 = latent + rnorm(N, mean=0, sd=0.2) # the strongest correlate item2 = latent + rnorm(N, mean=0, sd=0.3) item3 = latent + rnorm(N, mean=0, sd=0.5) item4 = latent + rnorm(N, mean=0, sd=1.0) item5 = latent + rnorm(N, mean=0, sd=1.2) # the weakest ##### convert latent responses to ordered categories item1 = findInterval(item1, vec=c(-Inf,-2.5,-1, 1,2.5,Inf)) # fairly unbiased item2 = findInterval(item2, vec=c(-Inf,-2.5,-1, 1,2.5,Inf)) item3 = findInterval(item3, vec=c(-Inf,-3, -2, 2,3, Inf)) # middle values typical item4 = findInterval(item4, vec=c(-Inf,-3, -2, 2,3, Inf)) item5 = findInterval(item5, vec=c(-Inf,-3.5,-3,-1,0.5,Inf)) # high ratings typical ##### combined into final scale manifest = round(rowMeans(cbind(item1, item2, item3, item4, item5)), 1) manifest # [1] 3.4 3.6 3.4 3.8 2.6 3.4 3.2 2.0 3.8 3.2 round(latent, 1) # [1] 1.3 0.6 0.2 1.0 -1.5 0.1 0.4 -2.5 2.3 -0.3 cor(manifest, latent) # [1] 0.9280074
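A possible follow-up check (my addition, not part of the answer): by construction the items should track the latent variable progressively less well from item1 to item5, which the item-latent correlations make visible:
items <- cbind(item1, item2, item3, item4, item5)
round(cor(items, latent), 2)  # correlations should generally decline from item1 to item5 (with N = 10 there is some sampling noise)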
How to simulate Likert-scale data in R? [closed]
A likert scale, as the term is typically used, is just an ordinal rating scale. The phrase is often used for a single rating, which might have been called a likert item. Traditionally, the idea was
How to simulate Likert-scale data in R? [closed] A likert scale, as the term is typically used, is just an ordinal rating scale. The phrase is often used for a single rating, which might have been called a likert item. Traditionally, the idea was that you would have a set of likert items that all measure the same thing and have the same measurement properties. The result is that you could sum (or average) the items and end up with a good measure of something that approximated a continuous, interval scale (see: levels of measurement). On the other hand, to simulate data, you need to know the distribution that the data should have. More generally, for simulation studies people generally want to have a data generating process for the resulting distribution. A likert scale is a type of data gathering instrument, not a distribution and not a data generating process. Thus, what you ultimately need is to specify a data generating process that you believe is appropriate for the eventual likert data that you want to simulate. After that, there are just the trivial implementation details specific to the software you intend to use (in your case, R). Because people conceptualize likert data as manifest data derived from a latent variable, the most common approach would be to simulate the latent variable according to the theorized distribution (perhaps a normal distribution), and then have a function that maps it to a small ordered set of numbers (e.g., $1, \ldots, 5$). Note that moving from the latent to the manifest variable makes many of the parameters of the latent variable's distribution unidentifiable, so you often needn't bother worrying about them. A simple approach would be to have just the two steps move directly to the final rating, but a more comprehensive approach could model each item with their own set of the two steps, and then have the likert scale combined from the items just as they would be in a real case. Here is an example, coded in R. I will imagine that there are 5 items that measure the same construct. As such, they are moderately correlated. Two items might be 'reverse scored', but I will assume that doesn't affect the result appreciably so I won't simulate that. However, I will make some more strongly related to the underlying variable than others, and I will make some biased towards higher or lower ratings. set.seed(8649) # this makes the example exactly reproducible N = 10 # this is how much data I'll generate latent = rnorm(N) # this is the actual latent variable I want to be measuring ##### generate latent responses to items item1 = latent + rnorm(N, mean=0, sd=0.2) # the strongest correlate item2 = latent + rnorm(N, mean=0, sd=0.3) item3 = latent + rnorm(N, mean=0, sd=0.5) item4 = latent + rnorm(N, mean=0, sd=1.0) item5 = latent + rnorm(N, mean=0, sd=1.2) # the weakest ##### convert latent responses to ordered categories item1 = findInterval(item1, vec=c(-Inf,-2.5,-1, 1,2.5,Inf)) # fairly unbiased item2 = findInterval(item2, vec=c(-Inf,-2.5,-1, 1,2.5,Inf)) item3 = findInterval(item3, vec=c(-Inf,-3, -2, 2,3, Inf)) # middle values typical item4 = findInterval(item4, vec=c(-Inf,-3, -2, 2,3, Inf)) item5 = findInterval(item5, vec=c(-Inf,-3.5,-3,-1,0.5,Inf)) # high ratings typical ##### combined into final scale manifest = round(rowMeans(cbind(item1, item2, item3, item4, item5)), 1) manifest # [1] 3.4 3.6 3.4 3.8 2.6 3.4 3.2 2.0 3.8 3.2 round(latent, 1) # [1] 1.3 0.6 0.2 1.0 -1.5 0.1 0.4 -2.5 2.3 -0.3 cor(manifest, latent) # [1] 0.9280074
How to simulate Likert-scale data in R? [closed] A likert scale, as the term is typically used, is just an ordinal rating scale. The phrase is often used for a single rating, which might have been called a likert item. Traditionally, the idea was
38,240
How to simulate Likert-scale data in R? [closed]
One way to generate Likert data is according to a proportional odds model. Here, the underlying distribution of the (latent) response is a logistic random variable with a center $\mu$ that can vary as a function of one or more predictors. The latent variable is then thresholded into categories by an arbitrary number of cutpoints. Achieving a target number of response categories is quite difficult, either requiring advanced math or (more likely) playing it by ear. set.seed(123) n <- 1e6 beta <- 0.3 alpha <- sort(rnorm(5)) x <- seq(-3, 3, length.out = n) z <- rlogis(n, beta*x) y <- factor(findInterval(z, alpha)) library(MASS) fit <- polr(formula = y ~ x) This generates the association: > coef(fit) x 0.2982142 a log-odds ratio for endorsing any higher response comparing two groups differing by 1 unit of $X$.
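An additional sanity check (my addition; fit$zeta is where MASS::polr stores its estimated cutpoints): the recovered thresholds should approximate the true ones used to discretize the latent variable:
cbind(true = alpha, estimated = fit$zeta)  # with n = 1e6 these should agree closely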
How to simulate Likert-scale data in R? [closed]
One way to generate Likert data is according to a proportional odds model. Here, the underlying distribution of the (latent) response is a logistic random variable with a center $\mu$ that can vary as
How to simulate Likert-scale data in R? [closed] One way to generate Likert data is according to a proportional odds model. Here, the underlying distribution of the (latent) response is a logistic random variable with a center $\mu$ that can vary as a function of one or more predictors. The latent variable is then thresholded into categories by an arbitrary number of cutpoints. Achieving a target number of response categories is quite difficult, either requiring advanced math or (more likely) playing it by ear. set.seed(123) n <- 1e6 beta <- 0.3 alpha <- sort(rnorm(5)) x <- seq(-3, 3, length.out = n) z <- rlogis(n, beta*x) y <- factor(findInterval(z, alpha)) library(MASS) fit <- polr(formula = y ~ x) This generates the association: > coef(fit) x 0.2982142 a log-odds ratio for endorsing any higher response comparing two groups differing by 1 unit of $X$.
How to simulate Likert-scale data in R? [closed] One way to generate Likert data is according to a proportional odds model. Here, the underlying distribution of the (latent) response is a logistic random variable with a center $\mu$ that can vary as
38,241
How to simulate Likert-scale data in R? [closed]
Original source is here: http://ravshansk.com/articles/likert.html The following formula works not only for Likert-scale models, but for any categorically distributed variables. Suppose you want to generate 5-category data with values (x1, x2, x3, x4, x5) for N participants with probabilities (1/10, 2/10, 4/10, 2/10, 1/10). The following code will work: x1 <- 1; x2 <- 2; x3 <- 3; x4 <- 4; x5 <- 5 # e.g. the five category scores N <- 100 # number of participants M <- 1000 # M is any integer much greater than N distribution <- c(rep(x1,1),rep(x2,2),rep(x3,4),rep(x4,2),rep(x5,1)) potential_population <- rep(distribution, M) likert_data <- sample(potential_population, N) The main idea here is to write a vector "distribution" with the appropriate number of repetitions of each value so that together they satisfy the desired probabilities. Needless to say, you have to set common denominators and integer numerators for the probabilities.
How to simulate Likert-scale data in R? [closed]
Original source is here: http://ravshansk.com/articles/likert.html The following formula works not only for Likert-scale models, but for any categorically distributed variables. Suppose you want to g
How to simulate Likert-scale data in R? [closed] Original source is here: http://ravshansk.com/articles/likert.html The following formula works not only for Likert-scale models, but for any categorically distributed variables. Suppose you want to generate 5-category data with values (x1, x2, x3, x4, x5) for N participants with probabilities (1/10, 2/10, 4/10, 2/10, 1/10). The following code will work: x1 <- 1; x2 <- 2; x3 <- 3; x4 <- 4; x5 <- 5 # e.g. the five category scores N <- 100 # number of participants M <- 1000 # M is any integer much greater than N distribution <- c(rep(x1,1),rep(x2,2),rep(x3,4),rep(x4,2),rep(x5,1)) potential_population <- rep(distribution, M) likert_data <- sample(potential_population, N) The main idea here is to write a vector "distribution" with the appropriate number of repetitions of each value so that together they satisfy the desired probabilities. Needless to say, you have to set common denominators and integer numerators for the probabilities.
How to simulate Likert-scale data in R? [closed] Original source is here: http://ravshansk.com/articles/likert.html The following formula works not only for Likert-scale models, but for any categorically distributed variables. Suppose you want to g
38,242
Etymology of "Adam" algorithm for gradient descent
On p.1 of the document you cite: "the name Adam is derived from adaptive moment estimation".
Etymology of "Adam" algorithm for gradient descent
On p.1 of the document you cite: "the name Adam is derived from adaptive moment estimation".
Etymology of "Adam" algorithm for gradient descent On p.1 of the document you cite: "the name Adam is derived from adaptive moment estimation".
Etymology of "Adam" algorithm for gradient descent On p.1 of the document you cite: "the name Adam is derived from adaptive moment estimation".
38,243
Etymology of "Adam" algorithm for gradient descent
Although Nick already answered the question, I would like to elaborate a bit. From the introduction of Adam: A Method for Stochastic Optimization (the original Adam paper): The method computes individual adaptive learning rates for different parameters from estimates of first and second moments of the gradients; the name Adam is derived from adaptive moment estimation. Adam uses: $$\theta_{t+1}=\theta_{t}-\frac{\alpha}{\sqrt{\hat{v}_{t}}+\epsilon}\hat{m}_{t}$$ where: $\theta_k$ is the vector of weights and bias in step $k$. All of the operations are element-wise. ${\hat m}_t$ is a bias-corrected moving average (implemented as an exponentially decaying average) of the gradients that were calculated until step $t$. In other words, $\hat m_t$ is an adaptive estimation of the first raw moment (i.e. the mean) of the gradient. Why "adaptive"? Because it is a weighted mean that gives more weight to gradients calculated closer to the current step, and gives virtually $0$ weight to gradients that were calculated in the distant past. In each step it is adapting to better estimate the mean of the gradient in the neighborhood of our current location in the cost function. (When I think about a moving average, I like to visualize a comet's trail, which becomes dimmer and dimmer as it gets further from the comet.) Similarly, ${\hat v}_t$ is a bias-corrected moving average of the squares of the gradients. I.e. ${\hat v}_t$ is an adaptive estimation of the second raw moment (i.e. the uncentered variance) of the gradient. $\alpha$ is a scalar that the paper refers to as "stepsize" and sometimes "learning rate". Confusingly, the paper refers to $\frac{\alpha}{\sqrt{\hat{v}_{t}}+\epsilon}\hat{m}_{t}$ as "stepsize" or "effective step in parameter space". Thus, if I understand correctly, "learning rates" in the quote above refers to the components of $\frac{\alpha}{\sqrt{\hat{v}_{t}}+\epsilon}\hat{m}_{t}$, and Adam is named after the computing of these components, which is mainly according to $\hat{m}_{t}$ and $\hat{v}_{t}$, the adaptive moment estimations. It should also be noted that "adaptive" sometimes refers to using different learning rates for different parameters, in contrast to using the same learning rate for all parameters (sometimes called "global learning rate"). E.g. the basic stochastic gradient descent (SGD) uses $\theta_{t+1}=\theta_{t}-\eta g_{t}$, where $\eta$ is a scalar. So if we think of $\hat{m}_{t}$ as parallel to $g_t$ in SGD, then Adam also has "adaptive" learning rates in this sense. (Though my guess is that this wasn't what the authors meant.)
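For concreteness, here is a minimal R sketch of the update rule above (my own illustration, not the authors' reference implementation; the default hyperparameter values are the ones given in the paper):
adam_step <- function(theta, grad, state,
                      alpha = 0.001, beta1 = 0.9, beta2 = 0.999, eps = 1e-8) {
  state$t <- state$t + 1
  state$m <- beta1 * state$m + (1 - beta1) * grad    # decaying average of gradients
  state$v <- beta2 * state$v + (1 - beta2) * grad^2  # decaying average of squared gradients
  m.hat <- state$m / (1 - beta1^state$t)             # bias-corrected first moment
  v.hat <- state$v / (1 - beta2^state$t)             # bias-corrected second moment
  state$theta <- theta - alpha * m.hat / (sqrt(v.hat) + eps)
  state
}
# usage: minimize f(theta) = sum(theta^2), whose gradient is 2 * theta
state <- list(t = 0, m = 0, v = 0, theta = c(3, -2))
for (i in 1:2000) state <- adam_step(state$theta, 2 * state$theta, state, alpha = 0.01)
round(state$theta, 2)  # should be near c(0, 0)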
Etymology of "Adam" algorithm for gradient descent
Although Nick already answered the question, I would like to elaborate a bit. From the introduction of Adam: A Method for Stochastic Optimization (the original Adam paper): The method computes indivi
Etymology of "Adam" algorithm for gradient descent Although Nick already answered the question, I would like to elaborate a bit. From the introduction of Adam: A Method for Stochastic Optimization (the original Adam paper): The method computes individual adaptive learning rates for different parameters from estimates of first and second moments of the gradients; the name Adam is derived from adaptive moment estimation. Adam uses: $$\theta_{t+1}=\theta_{t}-\frac{\alpha}{\sqrt{\hat{v}_{t}}+\epsilon}\hat{m}_{t}$$ while: $\theta_k$ is the vector of weights and bias in step $k$. All of the operations are element-wise. ${\hat m}_t$ is a bias-corrected moving average (implemented as an exponentially decaying average) of the gradients that were calculated until step $t$. In other words, $\hat m_t$ is an adaptive estimation of the first raw moment (i.e. the mean) of the gradient. Why "adaptive"? Because it is a weighted mean that gives more weight to gradients calculated closer to the current step, and gives virtually $0$ weight to gradients that were calculated in the distant past. In each step it is adapting to better estimate the mean of the gradient in the neighborhood of our current location in the cost function. (When I think about a moving average, I like to visualize a comet's trail, which becomes dimmer and dimmer as it gets further from the comet: ) Similarly, ${\hat v}_t$ is a bias-corrected moving average of the squares of the gradients. I.e. ${\hat v}_t$ is an adaptive estimation of the second raw moment (i.e. the uncentered variance) of the gradient. $\alpha$ is a scalar that the paper refers to as "stepsize" and sometimes "learning rate". Confusingly, the paper refers to $\frac{\alpha}{\sqrt{\hat{v}_{t}}+\epsilon}\hat{m}_{t}$ as "stepsize" or "effective step in parameter space". Thus, if I understand correctly, "learning rates" in the quote above refers to the components of $\frac{\alpha}{\sqrt{\hat{v}_{t}}+\epsilon}\hat{m}_{t}$, and Adam is named after the computing of these components, which is mainly according to $\hat{m}_{t}$ and $\hat{v}_{t}$, the adaptive moment estimations. It should also be noted that "adaptive" sometimes refers to using different learning rates for different parameters, in contrast to using the same learning rate for all parameters (sometimes called "global learning rate"). E.g. the basic stochastic gradient descent (SGD) uses $\theta_{t+1}=\theta_{t}-\eta g_{t}$, while $\eta$ is a scalar. So if we think of $\hat{m}_{t}$ as parallel to $g_t$ in SGD, then Adam also has "adaptive" learning rates in this sense. (Though my guess is that this wasn't what the authors meant.)
Etymology of "Adam" algorithm for gradient descent Although Nick already answered the question, I would like to elaborate a bit. From the introduction of Adam: A Method for Stochastic Optimization (the original Adam paper): The method computes indivi
38,244
Expected value of $1/x$ when $x$ follows a Beta distribution
First note that the pdf of a Beta$(\alpha, \beta)$ distribution is only defined for $\alpha, \beta > 0$. This means that when $\alpha \leq 0$ or $\beta \leq 0$, the integral $$\int_0^1 x^{\alpha - 1} (1 - x)^{\beta - 1}\, dx $$ is not finite, so the normalizing constant $B(\alpha, \beta)$ does not exist. Now, \begin{align*} E\left(\dfrac{1}{X} \right) & = \int_0^1 \dfrac{1}{x} \dfrac{x^{\alpha - 1} (1 - x)^{\beta - 1}}{B(\alpha, \beta)}dx\\ & = \int_0^1 \underbrace{\dfrac{x^{(\alpha - 1) - 1} (1 - x)^{\beta - 1}}{B(\alpha-1, \beta)}}_{\text{Beta}(\alpha-1, \beta)} \dfrac{B(\alpha-1, \beta)}{B(\alpha, \beta)} dx\\ & = \dfrac{B(\alpha-1, \beta)}{B(\alpha, \beta)} \text{ if } \alpha - 1 > 0 \,. \end{align*} Thus when $\alpha > 1$, the expectation $E(1/X)$ is finite and is as above (this can be further simplified to $\frac{\alpha + \beta - 1}{\alpha - 1}$, as per Nate Pope's comment). Otherwise, it is undefined.
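A quick Monte Carlo check of this result (my addition; the parameter values are chosen just for illustration):
a <- 3; b <- 2
set.seed(1)
mean(1 / rbeta(1e6, a, b))  # simulation, ~2
(a + b - 1) / (a - 1)       # closed form (alpha + beta - 1)/(alpha - 1) = 2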
Expected value of $1/x$ when $x$ follows a Beta distribution
First note that the pdf of a Beta$(\alpha, \beta)$ distribution is only defined for $\alpha, \beta > 0$. This means that when $\alpha \leq 0$ or $\beta \leq 0$, the integral $$\int_0^1 x^{\alpha - 1} (
Expected value of $1/x$ when $x$ follows a Beta distribution First note that the pdf of a Beta$(\alpha, \beta)$ distribution is only defined for $\alpha, \beta > 0$. This means that when $\alpha \leq 0$ or $\beta \leq 0$, the integral $$\int_0^1 x^{\alpha - 1} (1 - x)^{\beta - 1}\, dx $$ is not finite, so the normalizing constant $B(\alpha, \beta)$ does not exist. Now, \begin{align*} E\left(\dfrac{1}{X} \right) & = \int_0^1 \dfrac{1}{x} \dfrac{x^{\alpha - 1} (1 - x)^{\beta - 1}}{B(\alpha, \beta)}dx\\ & = \int_0^1 \underbrace{\dfrac{x^{(\alpha - 1) - 1} (1 - x)^{\beta - 1}}{B(\alpha-1, \beta)}}_{\text{Beta}(\alpha-1, \beta)} \dfrac{B(\alpha-1, \beta)}{B(\alpha, \beta)} dx\\ & = \dfrac{B(\alpha-1, \beta)}{B(\alpha, \beta)} \text{ if } \alpha - 1 > 0 \,. \end{align*} Thus when $\alpha > 1$, the expectation $E(1/X)$ is finite and is as above (this can be further simplified to $\frac{\alpha + \beta - 1}{\alpha - 1}$, as per Nate Pope's comment). Otherwise, it is undefined.
Expected value of $1/x$ when $x$ follows a Beta distribution First note that the pdf of a Beta$(\alpha, \beta)$ distribution is only defined for $\alpha, \beta > 0$. This means that when $\alpha \leq 0$ or $\beta \leq 0$, the integral $$\int_0^1 x^{\alpha - 1} (
38,245
Expected value of $1/x$ when $x$ follows a Beta distribution
I want to point out another interesting solution method, which also generalizes the result to the expectation of $X^{-m}$ for integer $m=1,2,3,\dots$. I will use moment generating functions (mgf) and the results from the paper by N. Cressie et al. http://amstat.tandfonline.com/doi/abs/10.1080/00031305.1981.10479334?journalCode=utas20 "The Moment-Generating Function and Negative Integer Moments". They give the result that when $X$ is a positive random variable with mgf $M_X(t)$ which is defined in an open neighbourhood of the origin, then we have $$ \DeclareMathOperator{\E}{\mathbb{E}} \E X^{-m} = \Gamma(m)^{-1} \int_0^\infty t^{m-1} M_X(-t) \; dt $$ for positive integers $m$. It is known that for the beta distribution, the mgf is given by a confluent hypergeometric function as $$ M_X(t) = {}_1F_1(\alpha;\alpha+\beta;t) $$ so using the result above gives that $$ \E X^{-m} = \Gamma(m)^{-1} \int_0^\infty t^{m-1} {}_1F_1(\alpha;\alpha+\beta;-t)\; dt $$ I evaluated that integral with the help of Maple: assume( a>0, b>0 );assume(m-1,posint) GAMMA(m)^(-1) * int( t^(m-1)*hypergeom([a],[a+b],-t), t=0..infinity ) GAMMA(a + b) GAMMA(a - m) ------------------------- GAMMA(a) GAMMA(a + b - m) so finally we can write the result as $$ \E X^{-m} = \frac{\Gamma(\alpha+\beta)\Gamma(\alpha-m)}{\Gamma(\alpha)\Gamma(\alpha+\beta-m)} $$ which coincides with the other answer for $m=1$. Then some human mathematics is needed to conclude that we need the assumption $\alpha > m$ for this to be valid.
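The general formula can likewise be checked numerically in R (my addition; illustrative values, and note the requirement $\alpha > m$):
a <- 5; b <- 3; m <- 2
set.seed(1)
mean(rbeta(1e6, a, b)^(-m))                                  # simulation, ~3.5
gamma(a + b) * gamma(a - m) / (gamma(a) * gamma(a + b - m))  # closed form = 3.5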
Expected value of $1/x$ when $x$ follows a Beta distribution
I want to point out another interesting solution method, which also generalizes the result to the expectation of $X^{-m}$ for integer $m=1,2,3,\dots$. I will use moment generating functions (mgf) and
Expected value of $1/x$ when $x$ follows a Beta distribution I want to point out another interesting solution method, which also generalizes the result to the expectation of $X^{-m}$ for integer $m=1,2,3,\dots$. I will use moment generating functions (mgf) and the results from the paper by N. Cressie et al. http://amstat.tandfonline.com/doi/abs/10.1080/00031305.1981.10479334?journalCode=utas20 "The Moment-Generating Function and Negative Integer Moments". They give the result that when $X$ is a positive random variable with mgf $M_X(t)$ which is defined in an open neighbourhood of the origin, then we have $$ \DeclareMathOperator{\E}{\mathbb{E}} \E X^{-m} = \Gamma(m)^{-1} \int_0^\infty t^{m-1} M_X(-t) \; dt $$ for positive integers $m$. It is known that for the beta distribution, the mgf is given by a confluent hypergeometric function as $$ M_X(t) = {}_1F_1(\alpha;\alpha+\beta;t) $$ so using the result above gives that $$ \E X^{-m} = \Gamma(m)^{-1} \int_0^\infty t^{m-1} {}_1F_1(\alpha;\alpha+\beta;-t)\; dt $$ I evaluated that integral with the help of Maple: assume( a>0, b>0 );assume(m-1,posint) GAMMA(m)^(-1) * int( t^(m-1)*hypergeom([a],[a+b],-t), t=0..infinity ) GAMMA(a + b) GAMMA(a - m) ------------------------- GAMMA(a) GAMMA(a + b - m) so finally we can write the result as $$ \E X^{-m} = \frac{\Gamma(\alpha+\beta)\Gamma(\alpha-m)}{\Gamma(\alpha)\Gamma(\alpha+\beta-m)} $$ which coincides with the other answer for $m=1$. Then some human mathematics is needed to conclude that we need the assumption $\alpha > m$ for this to be valid.
Expected value of $1/x$ when $x$ follows a Beta distribution I want to point out another interesting solution method, which also generalizes the result to the expectation of $X^{-m}$ for integer $m=1,2,3,\dots$. I will use moment generating functions (mgf) and
38,246
Trying to understand the fitted vs residual plot? [duplicate]
According to the discussion in Draper and Smith's Applied Regression Analysis (3rd edition, roughly page 59), this residual plot may be used to check for violations in model assumptions particularly related to incorrect specification or presence of heteroscedasticity. In the case that no violations are detected, the figure might look like the following. Notice that the residuals are randomly distributed within the red horizontal lines, forming a horizontal band along the fitted values. There is no visible pattern, which indicates that our regression model specifies an adequate relationship between the outcome, $Y$, and the covariates, $X$. A figure depicting a potential violation in the model assumptions is where a horizontal band with a particular width may work well for one part of the data, but might not work so well for another section of the fitted values. In this example, variances for the first quarter of the data, up to about a fitted value of 40, are smaller than variances for fitted values larger than 40. The middle portion of the fitted values has substantially larger variances than the outer values. This indicates that the regression model may have failed to account for heteroscedasticity. As @ben-bolker mentions in his comments in the linked questions, this diagnostic plot may be even better suited for detection of non-linear relationships that were not included in the specification. Two reproducible simulated examples of non-linear relationships are presented below. (The R code is presented at the bottom of the post.) The first plot here repeats the ideal scenario, where the regression specification, $Y = \beta_0 + \beta_1 X + \epsilon$, adequately models the underlying relationship. In this instance, the fitted versus residual plot has horizontal red lines drawn at +/- 2. As in the first figure, the points more or less lie in this horizontal band and no residuals are larger than 3 in magnitude (max(abs(regs[[1]]$residuals)) returns 2.932835). In the second example, the outcome variable has a quadratic relationship with its covariate, $Y = \beta_0 + \beta_1 X + \beta_2 X^2$, but the regression specification only allows for a linear relationship. Here, the fitted versus residual plot shows a fairly strong sign of non-linearity with an upside down "U" shape. This is because the second order term of $X$ has a negative relationship with $Y$. The third example provides an instance where $\ln Y$ has a linear relationship with $X$, with $Y = \exp(\beta_0 + \beta_1 X + \epsilon)$, but the model fails to account for the needed transformation of $Y$. Here, the figure indicates a negative trend that isn't accounted for, and perhaps a bit of a funnel shape indicating heteroscedasticity. Further, there are larger numbers of residuals with extreme values, with 31 out of 500 values larger than 3 and four outside of the plot window, with values of roughly 10.1, 10.5, 16.4, and 18.2. This relates to the non-normal error example in @glenn-b's answer to the question linked by @gung above. # data set.seed(1234) x <- rnorm(500) y1 <- 2 * x + rnorm(500) y2 <- 2 * x - (.5 * x^2) + rnorm(500) y3 <- exp(.5 * x + rnorm(500)) # put data into dataframe to organize results df <- data.frame(x, y1, y2, y3) # run regressions regs <- lapply(df[-1], function(y) lm(y ~ x, data=df))
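Since the original figures are not reproduced here, a rough sketch of the diagnostic plots (my addition; the base-graphics choices are arbitrary) can be generated from the regs list as follows:
par(mfrow = c(1, 3))
for (i in seq_along(regs)) {
  plot(fitted(regs[[i]]), rstandard(regs[[i]]),
       xlab = "Fitted values", ylab = "Standardized residuals",
       main = names(regs)[i])
  abline(h = c(-2, 0, 2), col = "red", lty = c(2, 1, 2))  # reference band at +/- 2
}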
Trying to understand the fitted vs residual plot? [duplicate]
According to the discussion in Draper and Smith's Applied Regression Analysis (3rd edition, roughly page 59), this residual plot may be used to check for violations in model assumptions particularly r
Trying to understand the fitted vs residual plot? [duplicate] According to the discussion in Draper and Smith's Applied Regression Analysis (3rd edition, roughly page 59), this residual plot may be used to check for violations in model assumptions, particularly related to incorrect specification or the presence of heteroscedasticity. In the case that no violations are detected, the figure might look like the following. Notice that the residuals are randomly distributed within the red horizontal lines, forming a horizontal band along the fitted values. There is no visible pattern, which indicates that our regression model specifies an adequate relationship between the outcome, $Y$, and the covariates, $X$. A figure depicting a potential violation in the model assumptions is one where a horizontal band of a particular width works well for one part of the data, but not so well for another section of the fitted values. In this example, variances for the first quarter of the data, up to about a fitted value of 40, are smaller than variances for fitted values larger than 40. The middle portion of the fitted values has substantially larger variances than the outer values. This indicates heteroscedasticity that the regression model has failed to account for. As @ben-bolker mentions in his comments in the linked questions, this diagnostic plot may be even better suited for detection of non-linear relationships that were not included in the specification. Two reproducible simulated examples of non-linear relationships are presented below (the R code is presented at the bottom of the post). The first plot here repeats the ideal scenario, where the regression specification, $Y = \beta_0 + \beta_1 X + \epsilon$, adequately models the underlying relationship. In this instance, the fitted versus residual plot is shown with horizontal red lines drawn at ±2. As in the first figure, the points more or less lie in this horizontal band and no residuals are larger than 3 in magnitude (max(abs(regs[[1]]$residuals)) returns 2.932835). In the second example, the outcome variable has a quadratic relationship with its covariate, $Y = \beta_0 + \beta_1 X + \beta_2 X^2 + \epsilon$, but the regression specification only allows for a linear relationship. Here, the fitted versus residual plot shows a fairly strong sign of non-linearity with an upside down "U" shape. This is because the second order term of $X$ has a negative relationship with $Y$. The third example provides an instance where $\ln Y$ has a linear relationship with $X$, with $Y = \exp(\beta_0 + \beta_1 X + \epsilon)$, but the model fails to account for the needed transformation of $Y$. Here, the figure indicates a negative trend that isn't accounted for, and perhaps a bit of a funnel shape indicating heteroscedasticity. Further, there are larger numbers of residuals with extreme values, with 31 out of 500 residuals larger than 3 in magnitude and four outside of the plot window, with values of roughly (10.1, 10.5, 16.4, and 18.2). This relates to the non-normal error example in @glenn-b's answer to the question linked by @gung above. data set.seed(1234) x <- rnorm(500) y1 <- 2 * x + rnorm(500) y2 <- 2 * x - (.5 * x^2) + rnorm(500) y3 <- exp(.5 * x + rnorm(500)) # put data into dataframe to organize results df <- data.frame(x, y1, y2, y3) # run regressions regs <- lapply(df[-1], function(y) lm(y ~ x, data=df))
Trying to understand the fitted vs residual plot? [duplicate] According to the discussion in Draper and Smith's Applied Regression Analysis (3rd edition, roughly page 59), this residual plot may be used to check for violations in model assumptions particularly r
38,247
Trying to understand the fitted vs residual plot? [duplicate]
To follow up on @mdewey's answer and disagree mildly with @jjet's: the scale-location plot in the lower left is best for evaluating homo/heteroscedasticity. Two reasons: as raised by @mdewey: it's easier to judge the slope of a line than the amount of spread of a point cloud, and easier to fit a nonparametric smooth line to it for visualization purposes a data set with a non-uniform distribution of the fitted values (which is not itself problematic) can fool the viewer into believing there's heteroscedasticity, because your eye tends to pick out the extremes. Because more observations lead to more extreme residuals (in the sense of order statistics), it will appear that there's more variability in ranges with more data. In this case there are fewer points toward the extremes of the fitted values, which makes it look like the variability is highest in the middle. The scale-location plot avoids this problem.
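For reference, a minimal sketch of how to draw the scale-location plot in base R (the fit and dataset here are illustrative only): plot.lm with which = 3 plots the square root of the absolute standardized residuals against fitted values, with a smooth line already overlaid.
fit <- lm(dist ~ speed, data = cars)  # any lm fit; cars is a built-in dataset
plot(fit, which = 3)                  # which = 3 selects the scale-location panel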
Trying to understand the fitted vs residual plot? [duplicate]
To follow up on @mdewey's answer and disagree mildly with @jjet's: the scale-location plot in the lower left is best for evaluating homo/heteroscedasticity. Two reasons: as raised by @mdewey: it's ea
Trying to understand the fitted vs residual plot? [duplicate] To follow up on @mdewey's answer and disagree mildly with @jjet's: the scale-location plot in the lower left is best for evaluating homo/heteroscedasticity. Two reasons: as raised by @mdewey: it's easier to judge the slope of a line than the amount of spread of a point cloud, and easier to fit a nonparametric smooth line to it for visualization purposes a data set with a non-uniform distribution of the fitted values (which is not itself problematic) can fool the viewer into believing there's heteroscedasticity, because your eye tends to pick out the extremes. Because more observations lead to more extreme residuals (in the sense of order statistics), it will appear that there's more variability in ranges with more data. In this case there are fewer points toward the extremes of the fitted values, which makes it look like the variability is highest in the middle. The scale-location plot avoids this problem.
Trying to understand the fitted vs residual plot? [duplicate] To follow up on @mdewey's answer and disagree mildly with @jjet's: the scale-location plot in the lower left is best for evaluating homo/heteroscedasticity. Two reasons: as raised by @mdewey: it's ea
38,248
Trying to understand the fitted vs residual plot? [duplicate]
If you are looking at the top left plot then yes. However, the best plot for what you intend is the bottom left one, which folds the residuals of the first one about the horizontal axis, so that the smoothed line drawn on that plot should be horizontal if there is no relation between scale and location. In your case it does not look too bad, as the left-hand dip is probably only being driven by a couple of points.
Trying to understand the fitted vs residual plot? [duplicate]
If you are looking at the top left plot then yes. However the best plot for what you intend is the bottom left one which folds the residuals about the horizontal axis in the first one so that the smoo
Trying to understand the fitted vs residual plot? [duplicate] If you are looking at the top left plot then yes. However, the best plot for what you intend is the bottom left one, which folds the residuals of the first one about the horizontal axis, so that the smoothed line drawn on that plot should be horizontal if there is no relation between scale and location. In your case it does not look too bad, as the left-hand dip is probably only being driven by a couple of points.
Trying to understand the fitted vs residual plot? [duplicate] If you are looking at the top left plot then yes. However the best plot for what you intend is the bottom left one which folds the residuals about the horizontal axis in the first one so that the smoo
38,249
Trying to understand the fitted vs residual plot? [duplicate]
The second point is best evaluated using the top-left plot. Basically, you want to check to see whether the spread of the residuals is the same at all points along the x-axis. If it is, then you'll see a band of points that moves horizontally along the x-axis. This would then suggest little evidence of heteroscedasticity. If instead it appears that the spread of the points either increases or decreases as you move along the x-axis, then you might say that "the band of points is increasing/decreasing" rather than staying strictly horizontal. The notion of a "band" of points is really just referring to the overall subjective shape of the scatterplot rather than anything specific.
Trying to understand the fitted vs residual plot? [duplicate]
The second point is best evaluated using the top-left plot. Basically, you want to check to see whether the spread of the residuals is the same at all points along the x-axis. If it is, then you'll se
Trying to understand the fitted vs residual plot? [duplicate] The second point is best evaluated using the top-left plot. Basically, you want to check to see whether the spread of the residuals is the same at all points along the x-axis. If it is, then you'll see a band of points that moves horizontally along the x-axis. This would then suggest little evidence of heteroscedasticity. If instead it appears that the spread of the points either increases or decreases as you move along the x-axis, then you might say that "the band of points is increasing/decreasing" rather than staying strictly horizontal. The notion of a "band" of points is really just referring to the overall subjective shape of the scatterplot rather than anything specific.
Trying to understand the fitted vs residual plot? [duplicate] The second point is best evaluated using the top-left plot. Basically, you want to check to see whether the spread of the residuals is the same at all points along the x-axis. If it is, then you'll se
38,250
Using lasso for feature selection, followed by a non-regularized regression
Note that there exist multiple iterative LASSO procedures, so in general, it is not necessarily true that you should stick with the first LASSO estimates. For example: Post-LASSO-OLS: see Belloni, Chernozhukov (2013), Least squares after model selection in high-dimensional sparse models, Bernoulli 19(2), 521–547. Also known as the LASSO-OLS hybrid (Efron et al. 2004, Least angle regression, Annals of Statistics 32, 407–451). Adaptive LASSO (Zou 2006), possibly with multiple stages (Bühlmann, Meier 2008): two stages (or more), both using a CV procedure, the second step using a modified (re-weighted) penalty. Relaxed LASSO (Meinshausen 2007), fit on a collection of subsets computed by an initial LASSO. Now in general, I would use one of these procedures to decide whether or not to add more variables, instead of a BIC model selection procedure.
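As a rough sketch of the simplest of these ideas, Post-LASSO-OLS with glmnet might look like the following; the simulated data and object names are purely illustrative, and note that the naive OLS standard errors in stage 2 ignore the selection step (which is exactly what the Belloni–Chernozhukov analysis addresses):
library(glmnet)
set.seed(1)
n <- 100; p <- 20
X <- matrix(rnorm(n * p), n, p)                 # illustrative data only
y <- drop(X[, 1:3] %*% c(2, -1, 1)) + rnorm(n)  # three true signals

# stage 1: LASSO with lambda chosen by cross-validation
cvfit <- cv.glmnet(X, y, alpha = 1)
beta <- coef(cvfit, s = "lambda.min")
selected <- which(beta[-1] != 0)  # nonzero coefficients, intercept dropped

# stage 2: unpenalized OLS on the selected variables only
ols <- lm(y ~ X[, selected, drop = FALSE])
summary(ols)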
Using lasso for feature selection, followed by a non-regularized regression
Note that there exist multiple iterative LASSO procedures, so in general, it is not necessarily true that you should stick with the first LASSO estimates. For example: Post-LASSO-OLS: see Belloni, C
Using lasso for feature selection, followed by a non-regularized regression Note that there exist multiple iterative LASSO procedures, so in general, it is not necessarily true that you should stick with the first LASSO estimates. For example: Post-LASSO-OLS: see Belloni, Chernozhukov (2013), Least squares after model selection in high-dimensional sparse models, Bernoulli 19(2), 521–547. Also known as the LASSO-OLS hybrid (Efron et al. 2004, Least angle regression, Annals of Statistics 32, 407–451). Adaptive LASSO (Zou 2006), possibly with multiple stages (Bühlmann, Meier 2008): two stages (or more), both using a CV procedure, the second step using a modified (re-weighted) penalty. Relaxed LASSO (Meinshausen 2007), fit on a collection of subsets computed by an initial LASSO. Now in general, I would use one of these procedures to decide whether or not to add more variables, instead of a BIC model selection procedure.
Using lasso for feature selection, followed by a non-regularized regression Note that there exist multiple iterative LASSO procedures, so in general, it is not necessarily true that you should stick with the first LASSO estimates. For example: Post-LASSO-OLS: see Belloni, C
38,251
Using lasso for feature selection, followed by a non-regularized regression
Performing some variable selection (e.g. with LASSO with the penalty parameter chosen by cross-validation, or some of the other alternatives like the elastic net, etc.) and then fitting a model on the same data as if no variable selection had happened is always inappropriate: the second-stage inference ignores the selection step. Why not look at the results from LASSO? As stated by others, lots of predictors with few records is of course tricky, but at least the LASSO estimates will have some shrinkage of the coefficients to account for the variable selection.
Using lasso for feature selection, followed by a non-regularized regression
Performing some variable selection (e.g. with LASSO with the smoothing parameter chosen by cross-validation or some of the other alternatives like the elastic net etc.) and then fitting a model on the
Using lasso for feature selection, followed by a non-regularized regression Performing some variable selection (e.g. with LASSO with the penalty parameter chosen by cross-validation, or some of the other alternatives like the elastic net, etc.) and then fitting a model on the same data as if no variable selection had happened is always inappropriate: the second-stage inference ignores the selection step. Why not look at the results from LASSO? As stated by others, lots of predictors with few records is of course tricky, but at least the LASSO estimates will have some shrinkage of the coefficients to account for the variable selection.
Using lasso for feature selection, followed by a non-regularized regression Performing some variable selection (e.g. with LASSO with the smoothing parameter chosen by cross-validation or some of the other alternatives like the elastic net etc.) and then fitting a model on the
38,252
Looking for function to fit sigmoid-like curve
I think smoothing splines with small degrees of freedom would do the trick. Here's an example in R: The R code: txt <- "| 0 | 0 | | 1.6366666667 | -12.2012787905 | | 3.2733333333 | -13.7833876716 | | 4.91 | -10.5943208589 | | 6.5466666667 | -1.3584575518 | | 8.1833333333 | 8.1590423167 | | 9.82 | 13.8827937482 | | 10.4746666667 | 18.4965880076 | | 11.4566666667 | 42.1205206106 | | 11.784 | 45.0528073182 | | 12.4386666667 | 76.8150755186 | | 13.0933333333 | 80.0883540997 | | 14.73 | 89.7784173678 | | 16.3666666667 | 98.8113459392 | | 19.64 | 104.104366506 | | 22.9133333333 | 105.9929585305 | | 26.1866666667 | 94.0070414695 |" dat <- read.table(text=txt, sep="|")[,2:3] names(dat) <- c("x", "y") plot(dat$y~dat$x, pch = 19, xlab = "x", ylab = "y", main = "Smoothing Splines with Varying df") spl3 <- smooth.spline(x = dat$x, y = dat$y, df = 3) lines(spl3, col = 2) spl8 <- smooth.spline(x = dat$x, y = dat$y, df = 8) lines(spl8, col = 4) legend("topleft", c("df = 3", "df = 8"), col = c(2,4), bty = "n", lty = 1)
Looking for function to fit sigmoid-like curve
I think smoothing splines with small degrees of freedom would do the trick. Here's an example in R: The R code: txt <- "| 0 | 0 | | 1.6366666667 | -12.2012787905 | | 3.27333
Looking for function to fit sigmoid-like curve I think smoothing splines with small degrees of freedom would do the trick. Here's an example in R: The R code: txt <- "| 0 | 0 | | 1.6366666667 | -12.2012787905 | | 3.2733333333 | -13.7833876716 | | 4.91 | -10.5943208589 | | 6.5466666667 | -1.3584575518 | | 8.1833333333 | 8.1590423167 | | 9.82 | 13.8827937482 | | 10.4746666667 | 18.4965880076 | | 11.4566666667 | 42.1205206106 | | 11.784 | 45.0528073182 | | 12.4386666667 | 76.8150755186 | | 13.0933333333 | 80.0883540997 | | 14.73 | 89.7784173678 | | 16.3666666667 | 98.8113459392 | | 19.64 | 104.104366506 | | 22.9133333333 | 105.9929585305 | | 26.1866666667 | 94.0070414695 |" dat <- read.table(text=txt, sep="|")[,2:3] names(dat) <- c("x", "y") plot(dat$y~dat$x, pch = 19, xlab = "x", ylab = "y", main = "Smoothing Splines with Varying df") spl3 <- smooth.spline(x = dat$x, y = dat$y, df = 3) lines(spl3, col = 2) spl8 <- smooth.spline(x = dat$x, y = dat$y, df = 8) lines(spl8, col = 4) legend("topleft", c("df = 3", "df = 8"), col = c(2,4), bty = "n", lty = 1)
Looking for function to fit sigmoid-like curve I think smoothing splines with small degrees of freedom would do the trick. Here's an example in R: The R code: txt <- "| 0 | 0 | | 1.6366666667 | -12.2012787905 | | 3.27333
38,253
Looking for function to fit sigmoid-like curve
To fit a sigmoid-like function in a nonparametric way, we could use a monotone spline. This is implemented in the R package (all R packages referenced here are on CRAN) splines2. I will borrow some R code from the answer by @Chaconne, and modify it for my needs. splines2 offers the functions mSpline, implementing M-splines, which is an everywhere-nonnegative (on the interval where it is defined) spline basis, and iSpline, the integral of the M-spline basis. The latter are then monotone increasing, so we can fit an increasing function by using them as a regression spline basis and fitting a linear model with the coefficients restricted to be non-negative. Such constrained fitting is implemented in a user-friendly way by the R package colf, "constrained optimization on linear functions". The fits look like: The R code used: library(splines2) # includes monotone splines, M-splines, I-splines. library(colf) # constrained optimization on linear functions txt <- "| 0 | 0 | | 1.6366666667 | -12.2012787905 | | 3.2733333333 | -13.7833876716 | | 4.91 | -10.5943208589 | | 6.5466666667 | -1.3584575518 | | 8.1833333333 | 8.1590423167 | | 9.82 | 13.8827937482 | | 10.4746666667 | 18.4965880076 | | 11.4566666667 | 42.1205206106 | | 11.784 | 45.0528073182 | | 12.4386666667 | 76.8150755186 | | 13.0933333333 | 80.0883540997 | | 14.73 | 89.7784173678 | | 16.3666666667 | 98.8113459392 | | 19.64 | 104.104366506 | | 22.9133333333 | 105.9929585305 | | 26.1866666667 | 94.0070414695 |" dat <- read.table(text=txt, sep="|")[,2:3] names(dat) <- c("x", "y") plot(dat$y ~ dat$x, pch = 19, xlab = "x", ylab = "y", main = "Monotone Splines with Varying df") Imod_df_4 <- colf_nls(y ~ 1 + iSpline(x, df=4), data=dat, lower=c(-Inf, rep(0, 4)), control=nls.control(maxiter=1000, tol=1e-09, minFactor=1/2048) ) lines(dat$x, fitted(Imod_df_4), col="blue") Imod_df_6 <- colf_nls(y ~ 1 + iSpline(x, df=6), data=dat, lower=c(-Inf, rep(0, 6)), control=nls.control(maxiter=1000, tol=1e-09, minFactor=1/2048) ) lines(dat$x, fitted(Imod_df_6), col="orange") Imod_df_8 <- colf_nls(y ~ 1 + iSpline(x, df=8), data=dat, lower=c(-Inf, rep(0, 8)), control=nls.control(maxiter=1000, tol=1e-09, minFactor=1/2048) ) lines(dat$x, fitted(Imod_df_8), col="red") EDIT Monotone restrictions on a spline are a special case of shape-restricted splines, and there is now one (in fact several) R package implementing those, simplifying their use. I will do the above example again, with one of those packages. The R code is below, using the data as read in above: library(cgam) mod_cgam0 <- cgam(y ~ 1+s.incr(x), data=dat, family=gaussian) summary(mod_cgam0) Call: cgam(formula = y ~ 1 + s.incr(x), family = gaussian, data = dat) Coefficients: Estimate StdErr t.value p.value (Intercept) 43.4925 2.7748 15.674 < 2.2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for gaussian family taken to be 102.2557) Null deviance: 33749.25 on 16 degrees of freedom Residual deviance: 1636.091 on 12.5 observed degrees of freedom Approximate significance of smooth terms: edf mixture.of.Beta p.value s.incr(x) 3 0.9515 < 2.2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 CIC: 7.6873 This way the knots (and degrees of freedom) have been selected automatically. To fix the number of degrees of freedom use: mod_cgam1 <- cgam(y ~ 1+s.incr(x, numknots=5), data=dat, family=gaussian) A paper presenting cgam is here (arxiv).
Looking for function to fit sigmoid-like curve
To fit a sigmoid-like function in a nonparametric way, we could use a monotone spline. This is implemented in the R package (all R packages here referenced are on CRAN) splines2. I will borrow some
Looking for function to fit sigmoid-like curve To fit a sigmoid-like function in a nonparametric way, we could use a monotone spline. This is implemented in the R package (all R packages referenced here are on CRAN) splines2. I will borrow some R code from the answer by @Chaconne, and modify it for my needs. splines2 offers the functions mSpline, implementing M-splines, which is an everywhere-nonnegative (on the interval where it is defined) spline basis, and iSpline, the integral of the M-spline basis. The latter are then monotone increasing, so we can fit an increasing function by using them as a regression spline basis and fitting a linear model with the coefficients restricted to be non-negative. Such constrained fitting is implemented in a user-friendly way by the R package colf, "constrained optimization on linear functions". The fits look like: The R code used: library(splines2) # includes monotone splines, M-splines, I-splines. library(colf) # constrained optimization on linear functions txt <- "| 0 | 0 | | 1.6366666667 | -12.2012787905 | | 3.2733333333 | -13.7833876716 | | 4.91 | -10.5943208589 | | 6.5466666667 | -1.3584575518 | | 8.1833333333 | 8.1590423167 | | 9.82 | 13.8827937482 | | 10.4746666667 | 18.4965880076 | | 11.4566666667 | 42.1205206106 | | 11.784 | 45.0528073182 | | 12.4386666667 | 76.8150755186 | | 13.0933333333 | 80.0883540997 | | 14.73 | 89.7784173678 | | 16.3666666667 | 98.8113459392 | | 19.64 | 104.104366506 | | 22.9133333333 | 105.9929585305 | | 26.1866666667 | 94.0070414695 |" dat <- read.table(text=txt, sep="|")[,2:3] names(dat) <- c("x", "y") plot(dat$y ~ dat$x, pch = 19, xlab = "x", ylab = "y", main = "Monotone Splines with Varying df") Imod_df_4 <- colf_nls(y ~ 1 + iSpline(x, df=4), data=dat, lower=c(-Inf, rep(0, 4)), control=nls.control(maxiter=1000, tol=1e-09, minFactor=1/2048) ) lines(dat$x, fitted(Imod_df_4), col="blue") Imod_df_6 <- colf_nls(y ~ 1 + iSpline(x, df=6), data=dat, lower=c(-Inf, rep(0, 6)), control=nls.control(maxiter=1000, tol=1e-09, minFactor=1/2048) ) lines(dat$x, fitted(Imod_df_6), col="orange") Imod_df_8 <- colf_nls(y ~ 1 + iSpline(x, df=8), data=dat, lower=c(-Inf, rep(0, 8)), control=nls.control(maxiter=1000, tol=1e-09, minFactor=1/2048) ) lines(dat$x, fitted(Imod_df_8), col="red") EDIT Monotone restrictions on a spline are a special case of shape-restricted splines, and there is now one (in fact several) R package implementing those, simplifying their use. I will do the above example again, with one of those packages. The R code is below, using the data as read in above: library(cgam) mod_cgam0 <- cgam(y ~ 1+s.incr(x), data=dat, family=gaussian) summary(mod_cgam0) Call: cgam(formula = y ~ 1 + s.incr(x), family = gaussian, data = dat) Coefficients: Estimate StdErr t.value p.value (Intercept) 43.4925 2.7748 15.674 < 2.2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 (Dispersion parameter for gaussian family taken to be 102.2557) Null deviance: 33749.25 on 16 degrees of freedom Residual deviance: 1636.091 on 12.5 observed degrees of freedom Approximate significance of smooth terms: edf mixture.of.Beta p.value s.incr(x) 3 0.9515 < 2.2e-16 *** --- Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 CIC: 7.6873 This way the knots (and degrees of freedom) have been selected automatically. To fix the number of degrees of freedom use: mod_cgam1 <- cgam(y ~ 1+s.incr(x, numknots=5), data=dat, family=gaussian) A paper presenting cgam is here (arxiv).
Looking for function to fit sigmoid-like curve To fit a sigmoid-like function in a nonparametric way, we could use a monotone spline. This is implemented in the R package (all R packages here referenced are on CRAN) splines2. I will borrow some
38,254
Looking for function to fit sigmoid-like curve
The curve you show looks more like a cubic function, $ax^3+bx^2+cx+d$, as the ends turn up and down rather than extending flat/horizontal. Or something like this, made with a polynomial trend line in Excel: But otherwise, if you want the ends to extend horizontally, there are many sigmoidal CDF probability distributions to choose from. The questions you need to ask yourself in choosing the most appropriate distribution are: What is the underlying mechanism/rationale for a sigmoidal-shaped curve? How flexible in shape does it need to be? How many degrees of freedom? This will depend on how many data points you have, as you want to avoid overfitting. But also, what features vary and what features stay constant? The mean? Variance (spread)? Skewness (lop-sidedness)? Kurtosis (tails)? Then you can search for the right shape in this list on Wikipedia (https://en.wikipedia.org/wiki/List_of_probability_distributions), or refine your question with more details to get the best answer. There are also 4- and 5-parameter distributions based on the logit function with much more flexibility in shape, but again, you should avoid these unless you have a lot of data points. And PS. You should never selectively add or remove data points for fitting - BAD BOY!
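If a four-parameter logistic is what you end up wanting, base R ships a self-starting version of it, so a hedged sketch (assuming a data frame dat with columns x and y as in the other answers, and that the optimizer converges on your data) is simply:
fit4pl <- nls(y ~ SSfpl(x, A, B, xmid, scal), data = dat)  # A, B = left/right asymptotes
summary(fit4pl)
plot(dat$x, dat$y, pch = 19)
lines(sort(dat$x), fitted(fit4pl)[order(dat$x)], col = "red")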
Looking for function to fit sigmoid-like curve
The curve you show looks more like a cubic function, $ax^3+bx^2+cx+d$, as the ends turn up and down, rather than extend flat/horizontal. Or something like this, made with a polynomial trend line in E
Looking for function to fit sigmoid-like curve The curve you show looks more like a cubic function, $ax^3+bx^2+cx+d$, as the ends turn up and down rather than extending flat/horizontal. Or something like this, made with a polynomial trend line in Excel: But otherwise, if you want the ends to extend horizontally, there are many sigmoidal CDF probability distributions to choose from. The questions you need to ask yourself in choosing the most appropriate distribution are: What is the underlying mechanism/rationale for a sigmoidal-shaped curve? How flexible in shape does it need to be? How many degrees of freedom? This will depend on how many data points you have, as you want to avoid overfitting. But also, what features vary and what features stay constant? The mean? Variance (spread)? Skewness (lop-sidedness)? Kurtosis (tails)? Then you can search for the right shape in this list on Wikipedia (https://en.wikipedia.org/wiki/List_of_probability_distributions), or refine your question with more details to get the best answer. There are also 4- and 5-parameter distributions based on the logit function with much more flexibility in shape, but again, you should avoid these unless you have a lot of data points. And PS. You should never selectively add or remove data points for fitting - BAD BOY!
Looking for function to fit sigmoid-like curve The curve you show looks more like a cubic function, $ax^3+bx^2+cx+d$, as the ends turn up and down, rather than extend flat/horizontal. Or something like this, made with a polynomial trend line in E
38,255
Looking for function to fit sigmoid-like curve
You can use the sigmoid() function from the {pracma} package in R. The function will fit a sigmoidal curve to a numeric vector. If you don't care what function fits the data, I would recommend the gam() function from the {mgcv} package in R. It fits a smoothing function to the data using spline regression (the default is thin-plate, but you can check the documentation for other types). Using gam(), as with any non-parametric model fit, you won't be able to predict y from x with any reliability outside of the range of x in your data set, as the predictions will simply follow the direction of the "last slope" of the curve, but from your question it sounds like you are not concerned with that. Hope this helps!
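A minimal sketch of the gam() suggestion, assuming the data are in a data frame dat with columns x and y as in the other answers:
library(mgcv)
fit_gam <- gam(y ~ s(x), data = dat)   # thin-plate spline smooth by default
plot(fit_gam, residuals = TRUE, pch = 19)
# predict only within the observed range of x, per the caution above
newx <- data.frame(x = seq(min(dat$x), max(dat$x), length.out = 200))
pred <- predict(fit_gam, newdata = newx)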
Looking for function to fit sigmoid-like curve
You can use the sigmoid() function from the {pracma} package in R. The function will fit a sigmoidal curve to a numeric vector. If you don't care what function fits the data, I would recommend the gam
Looking for function to fit sigmoid-like curve You can use the sigmoid() function from the {pracma} package in R. The function will fit a sigmoidal curve to a numeric vector. If you don't care what function fits the data, I would recommend the gam() function from the {mgcv} package in R. It fits a smoothing function to the data using spline regression (the default is thin-plate, but you can check the documentation for other types). Using gam(), as with any non-parametric model fit, you won't be able to predict y from x with any reliability outside of the range of x in your data set, as the predictions will simply follow the direction of the "last slope" of the curve, but from your question it sounds like you are not concerned with that. Hope this helps!
Looking for function to fit sigmoid-like curve You can use the sigmoid() function from the {pracma} package in R. The function will fit a sigmoidal curve to a numeric vector. If you don't care what function fits the data, I would recommend the gam
38,256
Looking for function to fit sigmoid-like curve
Along with the other suggestions, a Gompertz growth curve would also fit this data. Here's what Wikipedia has to say about it: A Gompertz curve or Gompertz function, named after Benjamin Gompertz, is a sigmoid function. It is a type of mathematical model for a time series, where growth is slowest at the start and end of a time period. (https://en.wikipedia.org/wiki/Gompertz_function) The key is the sigmoid function. Here's the formula for $y$ as a function of $t$: $$y(t)=a \exp[-b\exp(-ct)],$$ where $a$ is an asymptote, since $\lim_{t \to \infty} a \exp[-b \exp(-ct)]= a \exp(0)=a$; $b>0$ sets the displacement along the $x$-axis (translates the graph to the left or right) $c > 0$ sets the growth rate ($y$ scaling). Here $\exp(1) = e$ is Euler's Number $2.71828...$.
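A hedged sketch of fitting this curve by nonlinear least squares in R, assuming the dat data frame used elsewhere in this thread. The starting values are rough guesses and convergence is not guaranteed; also, since these data dip below zero, an additive offset d has been added that the pure Gompertz form lacks:
fit_gomp <- nls(y ~ d + a * exp(-b * exp(-c * x)), data = dat,
                start = list(d = -15, a = 120, b = 5, c = 0.3))
coef(fit_gomp)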
Looking for function to fit sigmoid-like curve
Along with the other suggestions, a Gompertz growth curve would also fit this data. Here's what Wikipedia has to say about it: A Gompertz curve or Gompertz function, named after Benjamin Gompertz,
Looking for function to fit sigmoid-like curve Along with the other suggestions, a Gompertz growth curve would also fit this data. Here's what Wikipedia has to say about it: A Gompertz curve or Gompertz function, named after Benjamin Gompertz, is a sigmoid function. It is a type of mathematical model for a time series, where growth is slowest at the start and end of a time period. (https://en.wikipedia.org/wiki/Gompertz_function) The key is the sigmoid function. Here's the formula for $y$ as a function of $t$: $$y(t)=a \exp[-b\exp(-ct)],$$ where $a$ is an asymptote, since $\lim_{t \to \infty} a \exp[-b \exp(-ct)]= a \exp(0)=a$; $b>0$ sets the displacement along the $x$-axis (translates the graph to the left or right) $c > 0$ sets the growth rate ($y$ scaling). Here $\exp(1) = e$ is Euler's Number $2.71828...$.
Looking for function to fit sigmoid-like curve Along with the other suggestions, a Gompertz growth curve would also fit this data. Here's what Wikipedia has to say about it: A Gompertz curve or Gompertz function, named after Benjamin Gompertz,
38,257
Looking for function to fit sigmoid-like curve
But if you don't want to extrapolate (I think interpolate would be a better word) between the points, then no parametric function will give you anything better than what you have already got, that is, a straight line between your points. If you have a parametric model with as many parameters as you have observations, it will simply replicate what you already have.
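That straight-line-between-points fit is exactly linear interpolation, which in R is a one-liner (assuming the dat data frame used in the other answers):
f <- approxfun(dat$x, dat$y)  # piecewise-linear interpolant through the points
f(10)                         # interpolated value at x = 10
curve(f(x), from = min(dat$x), to = max(dat$x))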
Looking for function to fit sigmoid-like curve
But if you don't want to extrapolate (I think interpolate would be a better word) between the points, then no parametric function will give you better than what you have got, that is a straight line b
Looking for function to fit sigmoid-like curve But if you don't want to extrapolate (I think interpolate would be a better word) between the points, then no parametric function will give you anything better than what you have already got, that is, a straight line between your points. If you have a parametric model with as many parameters as you have observations, it will simply replicate what you already have.
Looking for function to fit sigmoid-like curve But if you don't want to extrapolate (I think interpolate would be a better word) between the points, then no parametric function will give you better than what you have got, that is a straight line b
38,258
Looking for function to fit sigmoid-like curve
If you're trying to obtain a CDF-like function (non-zero), you could use a weighted Weibull curve of the form $y=A(1-e^{-(x/\alpha)^\beta})$. When I do this, I obtain roughly $A = 100$, $\alpha=12.3$, and $\beta=9.0$ (the resultant $\beta$ is much higher than I've typically run across for lifetime distributions).
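A hedged sketch of how this fit might be reproduced with nls in R, using the reported estimates as starting values (assuming the dat data frame from the other answers; convergence depends on the data):
fit_wb <- nls(y ~ A * (1 - exp(-(x / alpha)^beta)), data = dat,
              start = list(A = 100, alpha = 12.3, beta = 9))
coef(fit_wb)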
Looking for function to fit sigmoid-like curve
If you're trying to obtain a CDF-like function (non-zero), you could use weighted Weibull curve of the form $y=A(1-e^{-(x/\alpha)^\beta})$ When I do this, I obtain roughly $A = 100$, $\alpha=12.3$, an
Looking for function to fit sigmoid-like curve If you're trying to obtain a CDF-like function (non-zero), you could use a weighted Weibull curve of the form $y=A(1-e^{-(x/\alpha)^\beta})$. When I do this, I obtain roughly $A = 100$, $\alpha=12.3$, and $\beta=9.0$ (the resultant $\beta$ is much higher than I've typically run across for lifetime distributions).
Looking for function to fit sigmoid-like curve If you're trying to obtain a CDF-like function (non-zero), you could use weighted Weibull curve of the form $y=A(1-e^{-(x/\alpha)^\beta})$ When I do this, I obtain roughly $A = 100$, $\alpha=12.3$, an
38,259
Sample size calculation for correlation study
I would be wary of using the published value of r, 0.47, as the basis for your sample size calculations. What if the true correlation is say 0.25? If that were the true population correlation, would you want your study to find a "significant" result? If so, compute the sample size for r = 0.25 (or even smaller). More generally, try to find the sample size that can detect (with reasonable power) the smallest effect (correlation coefficient for this example) you would care about.
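To make the point concrete, repeating the kind of power calculation shown in the other answers with the more conservative r = 0.25 roughly quadruples the required sample size (approximately 123, versus about 32 for r = 0.47):
library(pwr)
pwr.r.test(r = 0.25, sig.level = 0.05, power = 0.80,
           alternative = "two.sided")  # n comes out near 123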
Sample size calculation for correlation study
I would be wary of using the published value of r, 0.47, as the basis for your sample size calculations. What if the true correlation is say 0.25? If that were the true population correlation, would
Sample size calculation for correlation study I would be wary of using the published value of r, 0.47, as the basis for your sample size calculations. What if the true correlation is say 0.25? If that were the true population correlation, would you want your study to find a "significant" result? If so, compute the sample size for r = 0.25 (or even smaller). More generally, try to find the sample size that can detect (with reasonable power) the smallest effect (correlation coefficient for this example) you would care about.
Sample size calculation for correlation study I would be wary of using the published value of r, 0.47, as the basis for your sample size calculations. What if the true correlation is say 0.25? If that were the true population correlation, would
38,260
Sample size calculation for correlation study
To run a power analysis, you need to know three of the following four quantities to calculate the last one: Number of observations Effect size (correlation coefficient) Significance level Power You have stated the effect size (correlation coefficient) in your question to be 0.47. Next, let's decide to use the conventional significance level $\alpha = 0.05$. A typical choice for the power is $p = 0.8$. Using the library pwr in R, we get > pwr.r.test(n=NULL, r=0.47, sig.level=0.05, power=0.80, alternative="two.sided") approximate correlation power calculation (arctangh transformation) n = 32.38727 r = 0.47 sig.level = 0.05 power = 0.8 alternative = two.sided Alternatively, we could demand a higher power: > pwr.r.test(n=NULL, r=0.47, sig.level=0.05, power=0.95, alternative="two.sided") approximate correlation power calculation (arctangh transformation) n = 52.12905 r = 0.47 sig.level = 0.05 power = 0.95 alternative = two.sided You don't need a very large sample size because $r=0.47$ is already quite a strong relationship.
Sample size calculation for correlation study
To run the power analysis, you need to know three of the four to calculate the last one: Number of observations Effect size (correlation coefficient) Significance level Power You have stated the eff
Sample size calculation for correlation study To run a power analysis, you need to know three of the following four quantities to calculate the last one: Number of observations Effect size (correlation coefficient) Significance level Power You have stated the effect size (correlation coefficient) in your question to be 0.47. Next, let's decide to use the conventional significance level $\alpha = 0.05$. A typical choice for the power is $p = 0.8$. Using the library pwr in R, we get > pwr.r.test(n=NULL, r=0.47, sig.level=0.05, power=0.80, alternative="two.sided") approximate correlation power calculation (arctangh transformation) n = 32.38727 r = 0.47 sig.level = 0.05 power = 0.8 alternative = two.sided Alternatively, we could demand a higher power: > pwr.r.test(n=NULL, r=0.47, sig.level=0.05, power=0.95, alternative="two.sided") approximate correlation power calculation (arctangh transformation) n = 52.12905 r = 0.47 sig.level = 0.05 power = 0.95 alternative = two.sided You don't need a very large sample size because $r=0.47$ is already quite a strong relationship.
Sample size calculation for correlation study To run the power analysis, you need to know three of the four to calculate the last one: Number of observations Effect size (correlation coefficient) Significance level Power You have stated the eff
38,261
Sample size calculation for correlation study
Here is another example using GPower, shown as a graph of sample size versus power: A sample of 45 seems to be reasonable, with power > 0.9.
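A sketch of how the same curve could be traced in R with the pwr package (values here are approximate; the exact crossing point may differ slightly from GPower's):
library(pwr)
ns <- 10:80
pow <- sapply(ns, function(n) pwr.r.test(n = n, r = 0.47, sig.level = 0.05)$power)
plot(ns, pow, type = "l", xlab = "sample size", ylab = "power")
abline(h = 0.9, lty = 2)
min(ns[pow > 0.9])  # smallest n with power above 0.9, roughly 44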
Sample size calculation for correlation study
Here is another example using GPower expressed in graph, with sample size versus power: A sample of 45 seems to be reasonable, with power > 0.9.
Sample size calculation for correlation study Here is another example using GPower, shown as a graph of sample size versus power: A sample of 45 seems to be reasonable, with power > 0.9.
Sample size calculation for correlation study Here is another example using GPower expressed in graph, with sample size versus power: A sample of 45 seems to be reasonable, with power > 0.9.
38,262
Should I use t-test on highly skewed and discrete data?
Highly discrete and skew variables can exhibit some particular issues in their t-statistics: For example, consider something like this: (it has a bit more of a tail out to the right, that's been cut off, going out to 90-something) The distribution of two-sample t-statistics for samples of size 50 looks something like this: In particular, there are somewhat short tails and a noticeable spike at 0. Issues like these suggest that simulation from distributions that look something like your sample might be necessary to judge whether the sample size is 'large enough'. Your data seems to have somewhat more of a tail than in my above example, but your sample size is much larger (I was hoping for something like a frequency table). It may be okay, but you could either simulate from some models in the neighborhood of your sample distribution (or you could resample your data) to get some idea of whether those sample sizes would be sufficient to treat the distribution of your test statistics as approximately $t$. Simulation study A - t.test significance level (based on the supplied frequency tables) Here I resampled your frequency tables to get a sense of the impact of distributions like you have on the inference from a t-test. I did two simulations, both using your sample sizes for the UsersX and UsersY groups, but in the first instance sampling from the X-data for both and in the second instance sampling from the Y-data for both (to get the H0 true situation). The results were (not surprisingly given the similarity in shape) fairly similar: The distribution of p-values should look like a uniform distribution. The reason why it doesn't is probably for the same reason we see a spike in the histogram of the t-statistic I drew earlier - while the general shape is okay, there's a distinct possibility of a mean difference of exactly zero. This spike inflates the type I error rate - lifting a 5% significance level to roughly 7.5 or 8 percent: > sum(tpres1<.05)/length(tpres1) [1] 0.0769 > sum(tpres2<.05)/length(tpres2) [1] 0.0801 This is not necessarily a problem - if you know about it. You could, for example, (a) do the test "as is", keeping in mind you will get a somewhat higher type I error rate; or (b) drop the nominal type I error rate by about half (or even a bit more, since it affects smaller significance levels relatively more than larger ones). My suggestion - if you want to do a t-test - would instead be to use the t-statistic but to do a resampling-based test (do a permutation/randomization test or, if you prefer, do a bootstrap test). -- Simulation study B - Mann-Whitney test significance level (based on the supplied frequency tables) To my surprise, by contrast, the Mann-Whitney is quite level-robust at this sample size. This contradicts a couple of sets of published recommendations that I've seen (admittedly conducted at lower sample sizes). > sum(mwpres1<.05)/length(mwpres1) [1] 0.0509 > sum(mwpres2<.05)/length(mwpres2) [1] 0.0482 (the histograms for this case appear uniform, so this should work similarly at other typical significance levels) Significance levels of 4.8 and 5.1 percent (with standard error 0.22%) are excellent with distributions like these. On this basis I'd say that - on significance level at least - the Mann-Whitney is performing quite well. We'd have to do a power study to see the impact on power, but I don't expect it would do too badly compared to, say, the t-test (if we adjust things so they're at about the same actual significance level).
So I have to eat my previous words - my caution on the Mann-Whitney looks to be unnecessary at this sample size. My R code for reading in the frequency tables #metric1 sample1 UsersX=data.frame( count=c(182L, 119L, 41L, 11L, 7L, 5L, 5L, 3L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), value=c(0L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 12L, 17L, 18L, 20L, 29L, 35L, 42L) ) #metric 1 sample2 UsersY=data.frame( count=c(5098L, 2231L, 629L, 288L, 147L, 104L, 50L, 39L, 28L, 22L, 12L, 14L, 8L, 8L, 9L, 5L, 2L, 5L, 5L, 4L, 1L, 3L, 2L, 1L, 1L, 4L, 1L, 4L, 1L, 1L, 1L, 1L, 1L, 1L), value=c(0L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 18L, 19L, 20L, 21L, 22L, 25L, 26L, 27L, 28L, 31L, 33L, 37L, 40L, 44L, 50L, 76L) ) My R code for doing simulations resample=function(tbl,n=sum(tbl$count)) #$ sample(tbl$value,size=n,replace=TRUE,prob=tbl$count) #$ n1=sum(UsersX$count) #$ n2=sum(UsersY$count) #$ tpres1=replicate(10000,t.test(resample(UsersX),resample(UsersX,n2))$p.value) #$ tpres2=replicate(10000,t.test(resample(UsersY,n1),resample(UsersY))$p.value) #$ mwpres1=replicate(10000,wilcox.test(resample(UsersX),resample(UsersX,n2))$p.value)#$ mwpres2=replicate(10000,wilcox.test(resample(UsersY,n1),resample(UsersY))$p.value)#$ # "#$" at end of each line avoids minor issue with rendering R code containing "$"
Should I use t-test on highly skewed and discrete data?
Highly discrete and skew variables can exhibit some particular issues in their t-statistics: For example, consider something like this: (it has a bit more of a tail out to the right, that's been cut
Should I use t-test on highly skewed and discrete data? Highly discrete and skew variables can exhibit some particular issues in their t-statistics: For example, consider something like this: (it has a bit more of a tail out to the right, that's been cut off, going out to 90-something) The distribution of two-sample t-statistics for samples of size 50 looks something like this: In particular, there are somewhat short tails and a noticeable spike at 0. Issues like these suggest that simulation from distributions that look something like your sample might be necessary to judge whether the sample size is 'large enough'. Your data seems to have somewhat more of a tail than in my above example, but your sample size is much larger (I was hoping for something like a frequency table). It may be okay, but you could either simulate from some models in the neighborhood of your sample distribution (or you could resample your data) to get some idea of whether those sample sizes would be sufficient to treat the distribution of your test statistics as approximately $t$. Simulation study A - t.test significance level (based on the supplied frequency tables) Here I resampled your frequency tables to get a sense of the impact of distributions like you have on the inference from a t-test. I did two simulations, both using your sample sizes for the UsersX and UsersY groups, but in the first instance sampling from the X-data for both and in the second instance sampling from the Y-data for both (to get the H0 true situation). The results were (not surprisingly given the similarity in shape) fairly similar: The distribution of p-values should look like a uniform distribution. The reason why it doesn't is probably for the same reason we see a spike in the histogram of the t-statistic I drew earlier - while the general shape is okay, there's a distinct possibility of a mean difference of exactly zero. This spike inflates the type I error rate - lifting a 5% significance level to roughly 7.5 or 8 percent: > sum(tpres1<.05)/length(tpres1) [1] 0.0769 > sum(tpres2<.05)/length(tpres2) [1] 0.0801 This is not necessarily a problem - if you know about it. You could, for example, (a) do the test "as is", keeping in mind you will get a somewhat higher type I error rate; or (b) drop the nominal type I error rate by about half (or even a bit more, since it affects smaller significance levels relatively more than larger ones). My suggestion - if you want to do a t-test - would instead be to use the t-statistic but to do a resampling-based test (do a permutation/randomization test or, if you prefer, do a bootstrap test). -- Simulation study B - Mann-Whitney test significance level (based on the supplied frequency tables) To my surprise, by contrast, the Mann-Whitney is quite level-robust at this sample size. This contradicts a couple of sets of published recommendations that I've seen (admittedly conducted at lower sample sizes). > sum(mwpres1<.05)/length(mwpres1) [1] 0.0509 > sum(mwpres2<.05)/length(mwpres2) [1] 0.0482 (the histograms for this case appear uniform, so this should work similarly at other typical significance levels) Significance levels of 4.8 and 5.1 percent (with standard error 0.22%) are excellent with distributions like these. On this basis I'd say that - on significance level at least - the Mann-Whitney is performing quite well. 
We'd have to do a power study to see the impact on power, but I don't expect it would do too badly compared to, say, the t-test (if we adjust things so they're at about the same actual significance level). So I have to eat my previous words - my caution on the Mann-Whitney looks to be unnecessary at this sample size. My R code for reading in the frequency tables #metric1 sample1 UsersX=data.frame( count=c(182L, 119L, 41L, 11L, 7L, 5L, 5L, 3L, 1L, 2L, 1L, 1L, 1L, 1L, 1L, 1L, 1L), value=c(0L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 12L, 17L, 18L, 20L, 29L, 35L, 42L) ) #metric 1 sample2 UsersY=data.frame( count=c(5098L, 2231L, 629L, 288L, 147L, 104L, 50L, 39L, 28L, 22L, 12L, 14L, 8L, 8L, 9L, 5L, 2L, 5L, 5L, 4L, 1L, 3L, 2L, 1L, 1L, 4L, 1L, 4L, 1L, 1L, 1L, 1L, 1L, 1L), value=c(0L, 1L, 2L, 3L, 4L, 5L, 6L, 7L, 8L, 9L, 10L, 11L, 12L, 13L, 14L, 15L, 16L, 17L, 18L, 19L, 20L, 21L, 22L, 25L, 26L, 27L, 28L, 31L, 33L, 37L, 40L, 44L, 50L, 76L) ) My R code for doing simulations resample=function(tbl,n=sum(tbl$count)) #$ sample(tbl$value,size=n,replace=TRUE,prob=tbl$count) #$ n1=sum(UsersX$count) #$ n2=sum(UsersY$count) #$ tpres1=replicate(10000,t.test(resample(UsersX),resample(UsersX,n2))$p.value) #$ tpres2=replicate(10000,t.test(resample(UsersY,n1),resample(UsersY))$p.value) #$ mwpres1=replicate(10000,wilcox.test(resample(UsersX),resample(UsersX,n2))$p.value)#$ mwpres2=replicate(10000,wilcox.test(resample(UsersY,n1),resample(UsersY))$p.value)#$ # "#$" at end of each line avoids minor issue with rendering R code containing "$"
Should I use t-test on highly skewed and discrete data? Highly discrete and skew variables can exhibit some particular issues in their t-statistics: For example, consider something like this: (it has a bit more of a tail out to the right, that's been cut
38,263
Should I use t-test on highly skewed and discrete data?
You should not use the t-test or even Welch's modified t-test on very skewed data, because these tests tend to be conservative (e.g., the alpha and power of these tests can be reduced; Zimmerman and Zumbo, 1993). Then which test should you use? Your response variable is discrete count data with many 0's, and you want to compare the means of two independent groups. I suggest using zero-inflated negative binomial regression. This page has a great tutorial on this technique using R. Reference: D.W. Zimmerman & B.D. Zumbo (1993). Rank Transformations and the Power of the Student t Test and Welch t' Test for Non-Normal Populations With Unequal Variances, Canadian Journal of Experimental Psychology, 47:3, 523-539
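A hedged sketch of the suggested model using the pscl package (the data frame d and the variables y and group are hypothetical placeholders for the asker's data):
library(pscl)
# count part and zero-inflation part both depend on the group indicator
fit_zinb <- zeroinfl(y ~ group | group, data = d, dist = "negbin")
summary(fit_zinb)  # the count-model coefficient on group compares the groups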
Should I use t-test on highly skewed and discrete data?
You should not use the t-test or even Welch's modified t-test on very skewed data, because these tests tend to be conservative (e.g., alpha and power of these tests can be reduced; Zimmerman and Zumbo
Should I use t-test on highly skewed and discrete data? You should not use the t-test or even Welch's modified t-test on very skewed data, because these tests tend to be conservative (e.g., the alpha and power of these tests can be reduced; Zimmerman and Zumbo, 1993). Then which test should you use? Your response variable is discrete count data with many 0's, and you want to compare the means of two independent groups. I suggest using zero-inflated negative binomial regression. This page has a great tutorial on this technique using R. Reference: D.W. Zimmerman & B.D. Zumbo (1993). Rank Transformations and the Power of the Student t Test and Welch t' Test for Non-Normal Populations With Unequal Variances, Canadian Journal of Experimental Psychology, 47:3, 523-539
Should I use t-test on highly skewed and discrete data? You should not use the t-test or even Welch's modified t-test on very skewed data, because these tests tend to be conservative (e.g., alpha and power of these tests can be reduced; Zimmerman and Zumbo
38,264
Should I use t-test on highly skewed and discrete data?
To $T$ or not to $T$ -- is that the question? I would suggest backing off for a moment and asking yourself, "What IS the question?" Is the question, "Are the means of populations 1 and 2 the same?", or is the question, "Is the usage distribution the same in populations 1 and 2?", or is the question, "Are the medians of populations 1 and 2 the same?", or is the question something else yet? At $\nu > 350$ degrees of freedom the difference between using sample variances vs population variances is a minor issue. Questions of data provenance are much more important. These are questions like: how did these data come to be? Was any sort of random sampling mechanism involved? Also critical are questions related to the analysis, like those asked above. If you answer those questions, your choice of test statistic will be clearer. Of course, answering these questions comes before your stated question. Now, supposing that the question really is about the means, we have to ask if $N(0, 1)$ is a reasonable approximation to the distribution of the test statistic. The heavily skewed distributions you are dealing with cause me to doubt this. I'd recommend using an Edgeworth expansion and comparing that answer with the answer given by the standard Normal. Note that Edgeworth expansions are not free of problems themselves, but if the two methods are giving radically different answers I would tend to trust the Edgeworth expansion answer more than the $N(0, 1)$ answer.
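For concreteness, a sketch of the one-term Edgeworth correction for the standardized sample mean (the t-statistic version has a slightly different polynomial, but the message is the same) is $$P\left(\frac{\sqrt{n}(\bar X - \mu)}{\sigma} \le x\right) \approx \Phi(x) - \phi(x)\,\frac{\gamma_1 (x^2 - 1)}{6\sqrt{n}},$$ where $\gamma_1$ is the population skewness; the correction term shows directly how skewness distorts the $N(0,1)$ approximation, at rate $1/\sqrt{n}$.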
Should I use t-test on highly skewed and discrete data?
To $T$ or not to $T$ -- is that the question? I would suggest backing off for a moment and asking yourself, "What IS the question?" Is the question, "Are the means of populations 1 and 2 the same?",
Should I use t-test on highly skewed and discrete data? To $T$ or not to $T$ -- is that the question? I would suggest backing off for a moment and asking yourself, "What IS the question?" Is the question, "Are the means of populations 1 and 2 the same?", or is the question, "Is the usage distribution the same in populations 1 and 2?", or is the question, "Are the medians of populations 1 and 2 the same?", or is the question something else yet? At $\nu > 350$ degrees of freedom the difference between using sample variances vs population variances is a minor issue. Questions of data provenance are much more important. These are questions like: how did these data come to be? Was any sort of random sampling mechanism involved? Also critical are questions related to the analysis, like those asked above. If you answer those questions, your choice of test statistic will be clearer. Of course, answering these questions comes before your stated question. Now, supposing that the question really is about the means, we have to ask if $N(0, 1)$ is a reasonable approximation to the distribution of the test statistic. The heavily skewed distributions you are dealing with cause me to doubt this. I'd recommend using an Edgeworth expansion and comparing that answer with the answer given by the standard Normal. Note that Edgeworth expansions are not free of problems themselves, but if the two methods are giving radically different answers I would tend to trust the Edgeworth expansion answer more than the $N(0, 1)$ answer.
Should I use t-test on highly skewed and discrete data? To $T$ or not to $T$ -- is that the question? I would suggest backing off for a moment and asking yourself, "What IS the question?" Is the question, "Are the means of populations 1 and 2 the same?",
38,265
Should I use t-test on highly skewed and discrete data?
While it will come with its own set of limitations, propensity scoring may be a way to ensure sample equality (Connelly et al., 2013).
Should I use t-test on highly skewed and discrete data?
While it will come with its own set of limitations, propensity scoring may be a way to ensure sample equality (Connelly et al., 2013).
Should I use t-test on highly skewed and discrete data? While it will come with its own set of limitations, propensity scoring may be a way to ensure sample equality (Connelly et al., 2013).
Should I use t-test on highly skewed and discrete data? While it will come with its own set of limitations, propensity scoring may be a way to ensure sample equality (Connelly et al., 2013).
38,266
Interpreting one- and two-tailed tests
You don't choose a one-tailed test based on near-significance in a two-tailed test. You don't choose the direction of a one-tailed test based on directional information from the data. Or at the least, if you do those things, you must also double the resulting p-value. A one tailed test - if you do one at all - must be based on prior considerations, in place before you know what is in the data. If this is not the case, the significance levels (and p-values) are meaningless.
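A minimal numeric illustration of the doubling rule in R (the t statistic and degrees of freedom here are made-up values):
tstat <- 1.8; df <- 30
p_one <- pt(tstat, df, lower.tail = FALSE)  # one-tailed, direction fixed in advance
p_two <- 2 * p_one                          # what to report if the direction was chosen after seeing the data
c(one_tailed = p_one, two_tailed = p_two)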
38,267
Interpreting one- and two-tailed tests
Report the results that correspond to your hypothesis, which should be one- or two-tailed, not both. You should be able to decide which is appropriate on a theoretical basis before performing the test. Once you've decided, report the p value as you calculated it. If it's very small, consider the advice in responses to this question: How should tiny $p$-values be reported? (and why does R put a minimum on 2.22e-16?) If you are using the Neyman–Pearson approach to interpreting your p value, you probably know how to decide whether to reject or retain your null hypothesis based on the false positive error rate, which you must also choose in advance. It is incorrect to apply a one-sided test following a two-sided test of the otherwise-equivalent null hypothesis. Again, either one or the other is appropriate depending on your theoretical aim, not both. If a two-sided test is appropriate, you're using the Neyman–Pearson framework, and you fail to reject the null, then that is your result. If that doesn't suit your purposes, you can replicate the study anyway and see how it turns out the next time, but don't fail to report your first null result even if the second rejects the null. That is one of the primary causes of the file drawer effect, a meta-analyst's worst nightmare. For more on understanding the difference between one- and two-tailed tests, see: Difference between one-tailed and two-tailed testing? Independent t-tests. One- or two-tailed? Justification of one-tailed hypothesis testing
38,268
How to deal with a skewed class in binary classification having many features?
First, 18 isn't a lot of features at all, and you should see if you can get more data. Google uses a ridiculous number of features in their ad targeting and takes a different online/game-theoretic approach to choosing which ad to show to the audience.

Second, skewed class labels like this are a common problem. Search terms to look at include "imbalanced" or "unbalanced" classification and "skew insensitive". There are a bunch of approaches you can and should try (a small sketch of the first and third is given below):

- Stratified cross validation to make sure you end up with enough positives in the test set.
- Under/over sampling as others have mentioned, or roughly balanced bagging for random forests. There are also methods for generating new minority class samples and for sampling representative majority class samples. I saw a python library for this here.
- Class weighted or cost sensitive learning can work well, and there are versions of many methods that can do this (scikit-learn exposes it on many estimators via a class_weight argument).
- Boosting (gradient or adaptive) can work well.
- Transductive or one-class approaches, which treat the data as positive and unlabeled, can work well, though they assume the positives are members of a larger class of possible positives.
- Hellinger distance decision trees are gaining some buzz for working well on unbalanced data.

Most of these approaches essentially reflect that you care more about getting the positives right than about getting the negatives wrong. Within scikit-learn you're limited in the number of these you can try without some custom code, but there are lots of other libraries out there if you google around, though they'll be in a mix of languages.
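Here is a minimal scikit-learn sketch of two of those ideas, stratified cross validation plus class weighting; the simulated dataset and the roughly 0.4% positive rate are assumptions chosen to mimic the question, not a definitive recipe:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Simulated data: 18 features, roughly 0.4% positives
X, y = make_classification(n_samples=50_000, n_features=18,
                           weights=[0.996], random_state=0)

# 'balanced' reweights errors inversely to class frequency
clf = LogisticRegression(class_weight='balanced', max_iter=1000)

# Stratified folds keep some positives in every test split
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
print(cross_val_score(clf, X, y, cv=cv, scoring='f1').mean())

Note that plain accuracy would look excellent even for a model that always predicts the majority class, so a metric like F1, precision/recall or ROC AUC is the more informative yardstick here.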
38,269
How to deal with a skewed class in binary classification having many features?
I'll try to add a little intuition as to why you get such results. Considering all such classification tasks, the best result would be to predict 100% of the results correctly, right? In your case, with a 0.4% vs 99.6% class balance, if you predict 0 for every row, then you automatically get to be 99.6% right. That looks like a very, very good result - even though the classifier is useless! As for how to approach these problems - as far as I know, there are no algorithms that work well with very skewed classes. Hence there are two ways to approach it, just as DSea described: one is oversampling and the other is undersampling. In the case of oversampling you add the smaller class many times (sketched below). If you start out, as you do, with a 1:250 ratio of classes, you might want to take the smaller class 50 times, so you end up with a 50:250, or 1:5, ratio, which should already work with most classification algorithms. You'll have to keep in mind, of course, that each sample of the positive class is 50 times more "important" now. In the case of undersampling you'll aim for a similar ratio, but achieve it by just picking 5 random samples from the larger class for every one of the smaller class. The drawback here is that you're looking only at a tiny part of the whole dataset. So there are ways to work with the data you have, but everything is a bit more complicated than it seems in the beginning :)
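A minimal numpy sketch of the oversampling arithmetic described above; the arrays are random stand-ins for a real feature matrix:

import numpy as np

rng = np.random.default_rng(0)
X_pos = rng.normal(size=(40, 18))       # minority class (stand-in data)
X_neg = rng.normal(size=(10_000, 18))   # majority class, roughly 1:250

# Replicate the minority class 50 times to reach roughly a 1:5 ratio
X_pos_over = np.tile(X_pos, (50, 1))
X = np.vstack([X_pos_over, X_neg])
y = np.concatenate([np.ones(len(X_pos_over)), np.zeros(len(X_neg))])
print(y.mean())   # about 0.17, i.e. roughly 1 positive per 5 negatives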
38,270
How to deal with a skewed class in binary classification having many features?
The problem is the skew of the class balance. The simplest thing you could try would be to reduce the size of the majority class in your training set: just randomly sample (without replacement) N instances from the majority class, where N is the number of instances in the minority class. This is called 'undersampling' (sketched below).
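A small numpy sketch of that recipe; the labels and features below are simulated placeholders for a real dataset:

import numpy as np

rng = np.random.default_rng(0)
# Stand-in dataset: 40 positives among 10,040 rows
X = rng.normal(size=(10_040, 18))
y = np.concatenate([np.ones(40), np.zeros(10_000)])

pos_idx = np.flatnonzero(y == 1)
neg_idx = np.flatnonzero(y == 0)

# Keep every minority instance; draw an equal number of majority
# instances without replacement
neg_sample = rng.choice(neg_idx, size=len(pos_idx), replace=False)
keep = np.concatenate([pos_idx, neg_sample])
X_bal, y_bal = X[keep], y[keep]
print(len(y_bal), y_bal.mean())   # 80 rows, half of them positive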
38,271
Graphical Models and Explaining Away?
Yes, this is because the $\alpha, \beta$ node $d$-separates $m_2$ and $y_3$. See Probabilistic Reasoning in Intelligent Systems for an explanation of $d$-separation.
38,272
Graphical Models and Explaining Away?
Another intuitive example of why two sibling nodes are independent given their parent: Imagine $A$ and $B$ are two guys living in the same city. Whether $A$ gets wet or not depends on whether it rains or not. Same for $B$. Now, if we don't know whether it rained or not but we observe that $A$ is wet, we are going to think that it rained, and therefore it becomes likely that $B$ is wet as well. That is, when we do not know the value of the parent node (rain), information about $A$ also gives some information about $B$. However, if we know it rained (the parent node is observed), then to guess whether $B$ is wet or not we do not need to know anything about $A$; we don't care, since we already know it rained. That is, when you know the state of the parent you don't care about the rest of the nodes, since you depend only on your parent. If you don't know the state of your parent, then yes, the rest of the nodes can give you some hint about its state.
38,273
Graphical Models and Explaining Away?
I like to think about it like this: what additional information does knowing $y_3=X$ actually give you? Now, the additional is key here. Imagine you knew nothing but $y_3=X$. Then, in order:
1. We can learn about $m_3$ by asking what it needs to be to make $y_3=X$ more likely.
2. We can learn about $\alpha,\beta$ by asking ourselves what they need to be to make $m_3$ more likely to be what we think it should be from step 1.
3. The knowledge about $\alpha,\beta$ from step 2 can help us guess what the distribution of $m_2$ is.
So that's $p(m_2|y_3)$. But now run the same process already knowing for certain what $\alpha,\beta$ are, that is, $p(m_2|\alpha,\beta,y_3)$. It turns out step 2 is useless: we already know $\alpha,\beta$, since they are given, and no new information changes that. So step 3 happens regardless of whether $y_3$ is known or not. Hopefully this wasn't more confusing.
38,274
Graphical Models and Explaining Away?
\begin{align} p(m_2|\alpha,\beta,y_3)&=\frac{p(m_2,y_3|\alpha, \beta)}{p(y_3|\alpha,\beta)}\\ &= \frac{p(m_2|\alpha,\beta)p(y_3|\alpha, \beta)}{p(y_3|\alpha, \beta)}\\ &=p(m_2|\alpha,\beta) \end{align}
Here $p(m_2,y_3|\alpha, \beta)=p(m_2|\alpha,\beta)p(y_3|\alpha, \beta)$ holds because $m_2$, $y_3$ and $\alpha, \beta$ form a common-cause trail. That is, given $\alpha$ and $\beta$, the correlation between $m_2$ and $y_3$ disappears, and $m_2$ can influence $y_3$ only when $\alpha, \beta$ and $m_3$ are all unobserved.
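One can also check the conditional independence numerically. The Gaussian structure below is an assumption chosen purely for illustration, with a single scalar parent standing in for $\alpha, \beta$:

import numpy as np

rng = np.random.default_rng(0)
n = 200_000

alpha = rng.normal(size=n)        # shared parent (stands in for alpha, beta)
m2 = alpha + rng.normal(size=n)   # one child of the parent
m3 = alpha + rng.normal(size=n)   # another child
y3 = m3 + rng.normal(size=n)      # descendant of m3 only

# Marginally, m2 and y3 are correlated (through the common parent)
print(np.corrcoef(m2, y3)[0, 1])   # about 0.41

# Conditioning on the parent: within a thin slice of alpha values
# the correlation (approximately) vanishes
mask = np.abs(alpha - 0.5) < 0.05
print(np.corrcoef(m2[mask], y3[mask])[0, 1])   # close to 0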
38,275
How to store the standard errors with the lm() function in R? [closed]
Check the object that summary(reg) returns. You will then find that

> str(summary(reg)$coef)
 ...
> X <- summary(reg)$coef
> X[,2]
(Intercept)           x
 0.03325738  0.05558073

gives you what you want. Or, if you calculate them yourself (as @caracal showed in the comments):

sqrt(diag(vcov(reg)))
38,276
How to store the standard errors with the lm() function in R? [closed]
Somewhere Doug Bates once mentioned that accessor functions are preferable, so I'd do

R> example(lm)   ## to create lm.D9 object
[...]
R> coef(summary(lm.D9))
            Estimate Std. Error  t value    Pr(>|t|)
(Intercept)    5.032   0.220218 22.85012 9.54713e-15
groupTrt      -0.371   0.311435 -1.19126 2.49023e-01
R> str(coef(summary(lm.D9)))
 num [1:2, 1:4] 5.032 -0.371 0.22 0.311 22.85 ...
 - attr(*, "dimnames")=List of 2
  ..$ : chr [1:2] "(Intercept)" "groupTrt"
  ..$ : chr [1:4] "Estimate" "Std. Error" "t value" "Pr(>|t|)"
R> coef(summary(lm.D9))[,"Std. Error"]
(Intercept)    groupTrt
   0.220218    0.311435
R>

and the key is the coef() accessor for the summary object.
38,277
How to store the standard errors with the lm() function in R? [closed]
Following on @Joris Meys' answer, here is how to calculate the std. errors manually.

# Std. Error = sqrt(residual variance * diag((X'X)^-1)) = sqrt(diag(vcov(reg)))
# where vcov(reg) = summary(reg)$cov.unscaled * summary(reg)$sigma^2
# where summary(reg)$cov.unscaled = (X'X)^-1 for the design matrix X
#   (including the intercept column) = solve(t(x_) %*% x_)
# where summary(reg)$sigma = residual standard deviation
#   = sqrt(sum(reg$residuals^2) / (nrow(x) - ncol(x) - 1))
#   (the extra -1 accounts for the intercept)

> x <- matrix(runif(200), nrow = 100)
> y <- 5 + 3 * x[,1] + rnorm(100, 0, 0.15)
> reg <- lm(y ~ x)
> summary(reg)$coefficients[,'Std. Error']
(Intercept)          x1          x2
 0.03842706  0.05494507  0.05243990
> sqrt(diag(vcov(reg)))
(Intercept)          x1          x2
 0.03842706  0.05494507  0.05243990
> sqrt(diag(summary(reg)$sigma^2 * summary(reg)$cov.unscaled))
(Intercept)          x1          x2
 0.03842706  0.05494507  0.05243990
> x_ = cbind(rep(1, nrow(x)), x)
> sqrt(diag(sum(reg$residuals^2) / (nrow(x) - ncol(x) - 1) * solve(t(x_) %*% x_)))
[1] 0.03842706 0.05494507 0.05243990

For the mathematical details, please check Standard Error for a Parameter in Ordinary Least Squares or How to derive variance-covariance matrix of coefficients in linear regression.
38,278
Why is a "Correction" Required in Multiple Hypothesis Testing?
This is a tricky topic: when exactly do you correct for multiple testing? The two extremes are both problematic: never correcting for multiple testing will result in too many false positives, while always correcting for multiple testing seems impossible - e.g. if, over your career as a statistician, you perform $1\,000$ tests (which is still a conservative estimate), you wouldn't use $\alpha = \frac{5\%}{1\,000}$ for each of those tests. At the end of the day you will end up somewhere in the middle: you will account for multiple testing "in batches" and you'll have to decide how to "batch" tests (or confidence intervals, for that matter!) together.

As a frequentist (which I assume you are, because you are interested here in NHST) you are interested not in the result of a single test (which will be correct or wrong, but you won't know which of the two scenarios you are in) but rather in the properties of your procedure if it were performed repeatedly. Now what "the procedure" is depends on the context. One strategy is to do this on a paper-by-paper basis, i.e. each paper gets a budget of $\alpha = 5\%$ that you can spend. Still, if sample sizes are low (as they usually are), effect sizes are comparably small (as they usually are) and you are interested in many things at once (as one usually is), correcting for every test will be unsatisfactory.

Then one has to make a decision: what is the primary interest of this analysis / paper? For those tests / confidence intervals you correct for multiple testing and are thus allowed to give a "hard" interpretation of the results. All other analyses are declared secondary analyses, and the interpretation of their results is more exploratory, e.g. generating hypotheses for follow-up studies. Similarly, you can "batch" tests / confidence intervals together if you want to interpret the results of these analyses together: "If $\beta_1$ in this model is X and $\beta_1$ in the second model is Y then Z." Note that this also implies that you do not have to account for having $\beta_2, \dots, \beta_5$ in your models - if, that is, you do not want to interpret these in the end. All of this assumes, of course, that you have decided on a testing strategy before looking at the data.
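For the mechanics of adjusting one such batch, statsmodels ships ready-made corrections; the p-values below are invented for illustration:

import numpy as np
from statsmodels.stats.multitest import multipletests

# One paper's batch of primary-analysis p-values (made up)
pvals = np.array([0.003, 0.012, 0.040, 0.210, 0.049])

for method in ('bonferroni', 'holm', 'fdr_bh'):
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method=method)
    print(method, np.round(p_adj, 3), reject)

Holm is uniformly at least as powerful as plain Bonferroni while still controlling the familywise error rate, and Benjamini-Hochberg ('fdr_bh') controls the false discovery rate instead - which of these is appropriate depends on the batching decision discussed above.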
38,279
Why is a "Correction" Required in Multiple Hypothesis Testing?
There is a lot of arbitrariness in certain statistical practices. As other responses and previous discussions in the literature point out, it's very hard to justify the tradition of correcting for the multiple comparisons you make within one study/one paper. This is a convention that has developed, but honestly it has a somewhat shaky basis. On the other hand, it is clear that we want to avoid the scientific literature getting flooded with false research "findings" (although arguably that is already a problem): we know that if we do lots of comparisons, then even if there is just nothing going on, about 5% will end up having $p\leq 0.05$. Of course, in practice we investigate a mixture of effects that are there or not, that are small or large, etc., so we don't know whether we're in that situation. People are sometimes surprised to learn that, all else being equal, $p\leq 0.05$ is less likely to reflect a genuine effect in the direction indicated by the point estimate if the study was less well powered. I.e. "The study was small, but we nevertheless achieved significance, so the effect must be especially strong, as also indicated by the large estimate!" is exactly the wrong way around. This is particularly obvious if you think about the "valid" frequentist study design of collecting no data and rejecting the null hypothesis randomly with probability 5%. In that case, it's clear that the p-value is completely useless, because the study is so underpowered. Finally, the current $p\leq 0.05$ "standard" is arguably already a rather weak standard for "statistical significance", so findings supported by the typical evidence behind $p\leq 0.05$ are not really that credible in the first place (i.e. even without multiplicity issues). What I'm trying to get at here is that if you really want to do null hypothesis testing and, thus, presumably also care about the familywise type I error rate, you really ought to worry about multiple comparisons to some extent. You can, of course, also read all of the above as a criticism of p-values and of over-emphasizing "significance" (see also here, here and here). So, given how weak $p\leq 0.05$ is and how this gets worse in small studies, you do not really want to pile multiple comparisons on top of that. Otherwise, the false positive rate of your work will get pretty high pretty fast. On that basis, one should not ignore multiplicity, and should consider it a problem for the reliability of the scientific literature. Of course, if one considers the purpose of "science" to be for "scientists" to make more "findings", publish them and as a result get tenure, one might not consider that a problem. Exactly how one deals with multiplicity is a different question. E.g. whether you should care about the familywise type I error rate, or rather about the false discovery rate, or perhaps about the probability that claims are true given all available information (or, alternatively, "if one were a bit skeptical of new claims"), is open for debate. Within any of these frameworks, there are then many different methods for doing what you set out to do (e.g. the Bonferroni correction, which as pointed out by others is unnecessarily conservative). Especially for the kind of exploratory work described in the original question, where there is a bit of a fishing expedition for "signals", I do not think that a confirmatory study mindset is quite right (i.e. where you have a clearly defined hypothesis and test it with a study), and something more oriented towards minimizing false positive findings from exploratory work would seem like a better fit (where p-values may be the wrong tool in the first place, although one can try to correct them in ways that aim to control e.g. the false discovery rate). It is just important to then not misleadingly report such work as if it had been confirmatory.
38,280
Why is a "Correction" Required in Multiple Hypothesis Testing?
I'm frequently asked when multiple comparison adjustment should be used. Then I start talking about the false discovery rate, type I errors and so on, only to conclude that I was not fully understood. So I came up with the following example (maybe the "show, don't tell" rule applies to data analysis too ;) ): Imagine you compare three parameters, P1, P2 and P3, in two groups (control and treatment, say), and get p = 0.03 for each parameter. Now, if you apply the Bonferroni correction, you'll reduce the number of significant results from 3 to 0 (or to 1 or 2 with other corrections). But you really, really love your triple significance, so you write three articles, one for each parameter, and send them to three distinct journals. Now, no one asks you for a multiple comparisons adjustment! The moral is: if you can sensibly (forget about ethics for a while) split your results into distinct papers/threads, you probably do not need adjustments.
38,281
Understand the illustration of the curse of dimensionality?
Let's look at the first few dimensions. For $d=1$, if examples are laid out on a regular grid, this just means that they are at equal distances on a straight line, e.g., at the integers. We can assume that our test example $x_t$ is at the origin, $x_t=0$. There are two nearest neighbors with equal distance $1$, namely the points at $1$ and $-1$. For $d=2$, we have a plane, and the regular grid could consist of all the two-dimensional integer points. For a test example again (without loss of generality) at the origin, there are four nearest neighbors, again all with distance $1$: $(0,1)$, $(-1,0)$, $(0,-1)$ and $(1,0)$. For $d=3$, we have a regular grid in three-dimensional space. Our test example at the origin now has six nearest neighbors, all at distance $1$. In general, since we have $d$ dimensions and can assume that our regular grid just consists of the $d$-dimensional integer points, we can take the test example at the origin, and then we can find all $2d$ nearest neighbors by choosing one of the $d$ dimensions and setting that coordinate to either $1$ or $-1$, and leaving all other coordinates at $0$. The problem this illustrates is that if there is no structure in our problem (i.e., our examples are on a regular grid, with no clusters, perhaps with some noise), then selecting a fixed number $k$ of nearest neighbors may simply mean picking them at random, since there are so many nearest neighbors, which may only be differentiated because of noise.
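A few lines of numpy confirm the counting argument, taking the grid to be the integer points and the test example at the origin:

import numpy as np

for d in (1, 2, 3, 10):
    # The 2d candidate neighbours: +1 or -1 along each coordinate axis
    neighbours = np.vstack([np.eye(d, dtype=int), -np.eye(d, dtype=int)])
    dists = np.linalg.norm(neighbours, axis=1)
    print(d, len(neighbours), dists.min(), dists.max())   # 2d points, all at distance 1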
38,282
Understand the illustration of the curse of dimensionality?
Especially, why "If the grid is d-dimensional, $x_t$'s $2d$ nearest examples are all at the same distance from it"? Could anyone please visualize the illustration (if possible)? The example speaks about a regular grid. In a regular grid you have, for each grid line (i.e., along each dimension), two neighbouring nodes: one at +1 and one at -1. For a non-regular distribution the $2d$ nearest neighbours are no longer at exactly the same distance, and there is some randomness. But the example with the regular grid shows that there are a lot of directions in which a nearest neighbour can be found (and this makes it easier for noise to overtake the signal).
38,283
Understand the illustration of the curse of dimensionality?
I think @Stephan Kolassa explains well what the authors meant in that paragraph. The theoretical basis for "in high dimensions all samples look alike" is laid out in section 3.4, Instability Result, of this paper. Skipping the proof on page 6, essentially ... all points converge to the same distance from the query point. Thus under these conditions, the concept of nearest neighbor is no longer useful. I think if the authors had cited this paper instead of using that grid example, the presentation would have been much clearer. Results in higher dimensions can be unintuitive because we live in a 3-d world.
38,284
Understand the illustration of the curse of dimensionality?
Imagine the unit cube $[0,1]^d$. All training data is sampled uniformly within this cube, i.e. $∀i, x_i∈[0,1]^d$, and we are considering the $k=10$ nearest neighbors of such a test point. Let $ℓ$ be the edge length of the smallest hyper-cube that contains all $k$ nearest neighbors of a test point. Then $ℓ^d≈\frac{k}{n}$ and $ℓ≈(\frac{k}{n})^{1/d}$. If $n=1000$, how big is $ℓ$? So as $d≫0$, almost the entire space is needed to find the 10-NN. To simulate the phenomenon, I conducted the following experiment (implementing the example in the reference):

import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams.update({'figure.figsize': (7, 5), 'figure.dpi': 100})
import numpy as np
np.random.seed(0)
from scipy.spatial import distance_matrix

n = 1000
for d in (2, 3, 10, 100, 1000, 10000):
    a = np.random.rand(n, d)
    b = np.random.rand(n, d)
    distances = distance_matrix(a, b).flatten()
    plt.hist(distances, bins=100,
             weights=np.ones(len(distances)) / len(distances))
    # plt.gca().set(title='Frequency Histogram', ylabel='Frequency')
    plt.show()

And the plots align very well with those in the reference. References: Lecture 2: k-nearest neighbors
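To answer the "how big is $ℓ$?" question directly, one can evaluate $ℓ≈(k/n)^{1/d}$ for $n=1000$ and $k=10$:

for d in (2, 3, 10, 100, 1000):
    print(d, round((10 / 1000) ** (1 / d), 3))
# 2 -> 0.1, 3 -> 0.215, 10 -> 0.631, 100 -> 0.955, 1000 -> 0.995

Already at $d=10$ the neighborhood cube spans most of each axis - the curse of dimensionality in a nutshell.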
38,285
Forecasting Prices vs Returns by Deep Learning
What you've outlined is probably the single most common error that machine learning researchers make when analyzing financial data: it's trivial to discover that a great predictor of tomorrow's price is today's price. The statistical term of art for this phenomenon is "non-stationarity." We have a number of questions about how to test for the stationarity of a time series. One such thread is How to know if a time series is stationary or non-stationary?

In the particular case of time series analysis of financial data, it might be helpful to review a high-quality statistical text, such as Statistics and Data Analysis for Financial Engineering, Second Edition (David Ruppert & David S. Matteson). On page 308, we find the remark

As mentioned, many financial time series do not exhibit stationarity, but often the changes in them, perhaps after applying a log transformation, are approximately stationary.

(This is a quite extensive textbook about time series data and financial data, so it's worth reading in some detail if you're interested in how to pursue this project further.)

So to answer your question, the example neural networks that you mention discover that the financial data are non-stationary, and these models make use of that fact when making predictions. But if you look at returns, then the non-stationarity phenomenon disappears, and the model is not able to discover such a simple rule to exploit.

The cure, in some sense, is to discover what drives stock prices, either generally or in the specific case of the equities you're studying. The price changes every second -- why is that? What information could a person have that causes a 0.1% shift from minute to minute, or 1% day to day? It's unlikely that yesterday's price movement, or the price movement the day before, will tell you much of anything about tomorrow's price movement by itself with a high degree of precision -- because, as we know, past performance is no guarantee of future returns. Framed in this way, the problem is not about choosing a certain kind of neural network, but instead about giving a neural network relevant data to inform its predictions. So, right now, you know that a good predictor of the price tomorrow is the price today. To improve on that, you'll have to find timely information that improves upon the "best guess" provided by yesterday's price data.

As an example of what form this information might take, consider pairs trading. In the 1980s, Morgan Stanley quants invented "pairs trading" and the strategy was profitable for a while. The premise is that two highly correlated stocks will tend to move together, so if there is movement in one that's not present in the other, you can make a trade with the thesis that eventually the two stocks will return to their equilibrium. So your neural network would use information about one stock to place trades on the second stock, and vice versa. Naturally, pairs trading is only profitable as long as the premise that the pairs are strongly correlated is true.
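To see this diagnosis in miniature, one can run an augmented Dickey-Fuller test (statsmodels' adfuller) on a simulated random-walk log-price and on its differences; the simulated series is an assumption standing in for real market data:

import numpy as np
from statsmodels.tsa.stattools import adfuller

rng = np.random.default_rng(0)
returns = rng.normal(0, 0.01, 2000)   # stand-in daily log-returns
log_price = np.cumsum(returns)        # the corresponding log-price path

print(adfuller(log_price)[1])   # large p-value: cannot reject a unit root
print(adfuller(returns)[1])     # tiny p-value: the differences look stationary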
38,286
Forecasting Prices vs Returns by Deep Learning
The question: it seems that (univariate) forecasting of the stock market, as done by websites using DL and LSTM, actually does not work that well if we focus on returns instead of prices. What is a relatively quick fix for that (or the most important fix)? The motivation is quite simple; you can find it in any financial economics/econometrics text. As a starting point we can consider the stock price (log-price) as described by a Random Walk (RW) model: $p_t = p_{t-1} + \epsilon_t$ where the $\epsilon_t$ are iid Gaussian noise. Then $E[p_{t+1}|I_t]= p_{t}$, where $I_t$ stands for the information set at time $t$ (which boils down to $p_t$ in this model). So for the log return $r_t = p_t - p_{t-1}$ we have $E[r_{t+1}|I_t]=0$ for all $t$. Take-away: the price is a non-stationary series and the best predictor for the future price is the price now; returns are stationary and the best predictor for them is zero. A candidate predictive model has to predict better than the RW, using $I_t$ in some way. It is usually better to work with returns than with prices, because the former are stationary (in the wide sense). Under the usual definition of predictability, prices are quite easy to predict while returns are hard to predict, so your evidence is the usual situation. Machine learning can be used fruitfully in finance too, but beating the RW is far from simple. NOTE: some comments below touch on interesting points such as equilibrium; however, here we have to stay focused on prediction only. Other comments ask for empirical support. About that request: the story above is a model (theory) and, details apart, it represents the oldest stochastic model in finance (the Bachelier model: https://en.wikipedia.org/wiki/Random_walk#Applications; https://en.wikipedia.org/wiki/Bachelier_model). The RW became a benchmark model in stock prediction (and for other assets too). Over the decades, hundreds of predictive models were proposed, in academia and in the financial industry: models that, looking for predictability, tried to refute RW conditions like $E[r_{t+1}|r_t, r_{t-1}, \ldots]=0$. They searched for some model with $E[r_{t+1}|I_t] \neq 0$ that would then beat the RW in MSE or other metrics. However, until now none of them has beaten the RW across all datasets (all countries/indices, all stocks/names, all data frequencies, etc.), so it is useless to cite specific articles here that confirm or refute return predictability; this is an endless empirical debate. Moreover, for some RW versions we can relax Gaussianity and independence into weaker conditions, and the basic implications above still hold. Finally, this history seems to me the strongest possible empirical confirmation that the basic RW implications above are, to some extent, consistent with the data. It therefore remains the simplest and most convincing explanation of why stock prices are easy to predict while returns are very hard to predict. Just one more point: some traders/analysts want to predict prices directly rather than returns. They frame this as predicting the direction of the price in the next period, not its level (which is easy), and use something like "price paths". This target is no different from predicting the direction of returns; however, these people do not realize that there is no way to infer anything from a non-stationary, non-ergodic time series.
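As a minimal R sketch of the benchmark logic (the log-prices here are simulated; a real candidate model would have to beat this naive forecast out of sample):

set.seed(1)
p <- cumsum(rnorm(500, sd = 0.01))   # simulated log-prices, a pure random walk
rw_forecast <- p[-length(p)]         # the RW forecast of p[t+1] is simply p[t]
mse_rw <- mean((p[-1] - rw_forecast)^2)
mse_rw                               # the hurdle a candidate model must clear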
38,287
Forecasting Prices vs Returns by Deep Learning
Advances in Financial Machine Learning is a good reference for practical usage of ML in the context of financial time series. Basically: formulating your label in terms of a level attained within a given amount of time (see the chapter 3 barrier method) will help you build practical and realistic strategies. Unless you are doing some market making, you usually don't care about the price at the next tick. For applications you might need to go further and use meta-labeling (i.e. two algorithms, one to predict the trend and one to predict the amount to bet). Regarding feature building, you often work with stationary processes, which you obtain through fractional differentiation (getting rid of noise without getting rid of information) instead of integer differentiation (order 0: price, order 1: return); see Chapter 5 - Fractionally differentiated features, and the brief sketch below. As mentioned by others, there are complex mechanisms at play and numerous actors on the market, such that any meaningful price prediction will be taken advantage of and nearly immediately corrected. That's why, without external/original/new information, you won't get meaningful predictions (and why underlyings are often modelled as random walks with drift for option pricing). Basically, you need external information to predict price movements. See here for an example where tweets are used (financial-tweets). So, generally speaking, when using ML for finance you just don't predict the next price from past prices; there is barely any value in doing that. The exception might be HFT, where you try to predict price changes from the limit order book, but that is a very specific case. And, to answer your general question, there is no easy way to deal with it: any 'easy' solution has already been implemented and optimised to the point that it is difficult to compete.
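A brief sketch of the fractional-differentiation idea in R (the series is simulated; fracdiff::diffseries() is one readily available implementation, and the order d = 0.4 is purely illustrative, not a recommendation):

library(fracdiff)
set.seed(7)
log_price <- cumsum(rnorm(1000, sd = 0.01))   # simulated random-walk log-prices
fd <- diffseries(log_price, d = 0.4)          # fractionally differenced series
# Idea: pick the smallest d that makes the series approximately stationary,
# keeping more memory than full integer differencing (d = 1).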
38,288
What happens in the sub-areas of AI? (ML, DL)
This is just terminology; no need to think about it too much, as different people classify different areas into different categories. For example, a lot of statisticians would consider machine learning to be a sub-area of statistics, people from AI would consider machine learning to be a sub-area of AI research, and people working in computer science consider it to be a sub-area of computer science. With this in mind, the thing to understand is that "Deep Learning" is not a distinct area from "Machine Learning", but a part of it, in the same way that "Building Bridges" is a sub-part of "Mechanics", which is a sub-part of "Physics". In the context of your question - neural networks vs deep neural networks - it is a bit like asking how long bridges are distinct from shorter bridges: different tools and techniques are involved, but the concept is the same. So, your questions: 1) Neural networks are not different; they just typically have to have more parameters (be "bigger") to be labelled "deep" neural networks. 2) Not necessarily. Neural networks themselves, loosely, can be thought of as multiple logistic regressions stacked on top of each other. Any time you create a model, feed its results to another model, and then another, etc., and try to "train" those models together, you can consider such an architecture to be "deep". 3) Typically, if you use the term "deep learning", everyone will assume you are talking about neural networks, because that is the current trend and because the term "deep learning" was first applied to neural networks. So if you use any other architecture, you will have to specify it to avoid confusing others. 4) Answered by 2)
38,289
What happens in the sub-areas of AI? (ML, DL)
Agreed with Karolis' answer that "there are no hard boundaries". In addition: It's the same architecture, of course. However, although we don't have a hard threshold on the number of layers for a neural network to be deep, in DL we're more interested in neural networks with a large number of layers, rather than 1 or 2. Typically, yes. See the Wikipedia page, for example: Deep learning is part of a broader family of machine learning methods based on artificial neural networks with representation learning. But this doesn't mean it will always be so. A lot of new architectures have been developed lately, e.g. Graph Neural Networks, which learn over arbitrary graph structures. This is an extension of neural nets to graphs, but such networks also differ significantly from the fully connected neural nets we're accustomed to. Not all of these new architectures have to fit under the ANN umbrella, and we might need to extend the definition in the near future. Not sure what you've asked.
38,290
What happens in the sub-areas of AI? (ML, DL)
I would second user2974951's comment (+1): Deep Learning entails stacking or layering within a methodology. Points 1, 2 & 4 have been fully answered by Karolis (+1). Regarding point 3: there are works that combine SVMs and DNNs. (E.g. Tang (2013), Deep Learning using Linear Support Vector Machines, which shows very promising results when replacing the softmax activation function with an L2-SVM, or Jiu (2017), Nonlinear Deep Kernel Learning for Image Annotation, where multiple kernel learning is presented within a deep learning framework.) In addition, kernel methods like Gaussian Processes have also seen a resurgence when effectively stacked on top of each other (e.g. see Damianou & Lawrence (2013), Deep Gaussian Processes, for a hierarchical way of stacking GPs, or Dunlop et al. (2018), How Deep Are Deep Gaussian Processes?, for a more in-depth discussion).
38,291
Expected value until a success?
I would have liked to comment but I still can't... so I'll give a complete answer, hoping that I'm not spoiling any homework. I'd start by saying that the "winning" sides of this five-sided unfair die are a distraction. We can re-arrange the calculations and obtain the expected value for a single roll: $$ E_s = 0.9 \cdot \sum_{i=1}^{4}{x_i \cdot \frac{p(x_i)}{\sum_{i=1}^{4}{p(x_i)}}} + 0.1 \cdot 0 = 0.9 \cdot E_w$$ where $E_w$ is the expected value of the win under the assumption that we win (which happens with probability $0.9 = \sum_{i=1}^{4}{p(x_i)} = 1 - 0.1$). It's like having a loaded coin, where you win $E_w$ with probability $0.9$ and get nothing otherwise. In the "extended game" case (i.e. winning allows us to continue), if we win the first roll we get an expected value of $E_w$ (for the first successful roll) plus our expected value for... an undefined number of further rolls, i.e. exactly what we are after. In other words, the expected value $E_m$ for multiple rolls will be: $$ E_m = 0.9\cdot(E_w + E_m)$$ $$ E_m = \frac{0.9 \cdot E_w}{0.1} = \frac{E_s}{0.1} = \frac{E_s}{p(x_5)}$$ From another angle, we might observe that the number of trials to get one "success" (in this case, losing!) when repeatedly tossing our loaded coin can be modeled by a geometric distribution. Hence we can just multiply the expected value for a single toss, $E_s$, by the average number of tosses needed to get this "successful failure", which is $\frac{1}{p(x_5)}$, and obtain the same result.
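A quick numeric check in R, using the probabilities given in the question ({0.15, 0.2, 0.25, 0.3} for payouts 1 to 4, and 0.1 for the losing side):

p <- c(0.15, 0.20, 0.25, 0.30)   # win probabilities; they sum to 0.9
Ew <- sum((1:4) * p / sum(p))    # expected win given a win: about 2.78
Es <- 0.9 * Ew                   # single-roll expected value: 2.5
Em <- Es / 0.1                   # expected value of the whole game: 25
c(Ew = Ew, Es = Es, Em = Em)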
38,292
Expected value until a success?
Recursion: we can define the expected value of the game in a recursive manner. Let's assume that the expected benefit from the game (all rolls until the game ends) is x. Using the probabilities you give for rolling, {0.15, 0.2, 0.25, 0.3, 0.1}, and the winnings of {\$1, \$2, \$3, \$4, [game ends]}, the expected value of a roll is \$2.5 plus the value of that 90% chance of continuing the game - and given the rules of this game, it's clear that "the right to continue the game" is exactly as valuable as "the right to play the game": if the game hasn't ended, then at the start of the second (or any other future) roll my potential future winnings (excluding the winnings of previous rolls) are exactly the same as at the beginning of the game. So we can define the value of the game before the first roll as x = \$2.5 + 0.9x, recursively referring to the value itself - and the resulting equation is trivially solvable, giving x = \$25. In essence this is equivalent to Polettix's answer, but IMHO this approach is much simpler to understand.
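The same recursion can also be checked numerically as a fixed-point iteration (a toy sketch; convergence is guaranteed because the multiplier 0.9 is less than 1):

x <- 0
for (i in 1:200) x <- 2.5 + 0.9 * x
x   # converges to 25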
38,293
Expected value until a success?
Simulation, per comment: at each iteration, the die is rolled 400 times, but only the rolls up to the first observed 5 are counted. Then the payout w is computed, ignoring the trial on which the 5 occurred. I chose 400 rolls because (with very high probability) that is enough to get a 5. [Sloppy programming, wasteful of random numbers, but runs quickly.]

set.seed(1237)   # for reproducibility
pr = c(.15, 0.2, 0.25, 0.3, 0.1)
m = 10^6; w = h = numeric(m)
for(i in 1:m) {
  x = sample(1:5, 400, rep=T, p=pr)
  h[i] = match(5,x); s = h[i]   # stopping point: first roll showing a 5
  w[i] = sum(x[1:s]) - 5        # total payout; subtract the terminal 5, which pays nothing
}
mean(w); mean(h)
[1] 25.00693   # approx 9*2.777778 = 25; avg total payout
[1] 10.00337   # approx 1/.1 = 10; avg trial number at stop

Note: you can look at summary(h) to see that it's consistent with a geometric distribution (and contains no NAs from almost-impossible runs of 400 rolls without a 5).

summary(h)
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
      1       3       7      10      14     143
38,294
Expected value until a success?
There's a .9 chance on any given roll that you'll win, and each roll that actually happens is worth 2.5 in expectation (the conditional expected win $E_w \approx 2.78$ times the .9 win probability). The first roll is certain to happen; a second roll happens only if the first avoided the losing side, which has probability .9; more generally, the $(n+1)$-th roll happens with probability $.9^n$. So you have $\sum_{n=0}^{\infty} (2.5\cdot.9^n) = 2.5\sum_{n=0}^{\infty} .9^n$, which is a geometric series. Using the formula $\sum_{n=0}^{\infty} r^n = \frac 1 {1-r}$ (valid for $|r|<1$), you get that the total EV is 2.5*10 = 25. You can also use the algebra mentioned in other answers: EV = 2.5 + .9EV -> EV = 25. This is of course simpler to do, but I decided to post both methods as there's a nonzero chance that someone will read this answer and it will make the geometric series formula slightly less mysterious. PS You could have a six-sided die where 5 and 6 have probability .05 each, and either results in a loss.
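A one-line numeric check of the series in R (truncating at 200 terms, which is plenty):

2.5 * sum(0.9^(0:200))   # approximately 25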
38,295
Checking a beta regression model via glmmTMB with DHARMa package
tl;dr it's reasonable for you to worry, but having looked at a variety of different graphical diagnostics I think everything looks pretty much OK. My answer will illustrate a bunch of other ways to look at a glmmTMB fit - more involved/less convenient than DHARMa, but it's good to look at the fit in as many different ways as one can. First let's look at the raw data (which I've called dd):

library(ggplot2); theme_set(theme_bw())
ggplot(dd, aes(Product, prop.bio, colour = Side)) +
  geom_line(colour = "gray", aes(group = Pacients)) +
  geom_point(aes(shape = Side)) +
  scale_colour_brewer(palette = "Dark2")

My first point is that the right-hand plot made by DHARMa (and in general, all predicted-vs-residual plots) is looking for bias in the model, i.e. patterns where the residuals vary systematically with the mean. This should never happen for a model with only categorical predictors (provided it contains all possible interactions of the predictors), because the model has one parameter for every possible fitted value ... we'll see below that it doesn't happen if we look at fitted vs. residuals at the population level rather than the individual level. The quickest way to get fitted-vs-residual plots (analogous to base R's plot.lm() method or lme4's plot.merMod()) is via broom.mixed::augment() + ggplot:

library(broom.mixed)
aa <- augment(m1.f, data = dd)
gg2 <- (ggplot(aa, aes(.fitted, .resid))
        + geom_line(aes(group = Pacients), colour = "gray")
        + geom_point(aes(colour = Side, shape = Product))
        + geom_smooth()
)

These fitted and residual values are at the individual-patient level. They do show a mild trend (which I admittedly don't understand at the moment), but the overall trend doesn't seem large relative to the scatter in the data. To check that this phenomenon is indeed caused by predictions at the patient rather than the population level, and to test the argument above that population-level effects should have exactly zero trend in the fitted-vs-residual plot, we can hack the glmmTMB predictions to construct population-level predictions and residuals (the next release of glmmTMB should make this easier):

aa$.fitted0 <- predict(m1.f, newdata = transform(dd, Pacients = NA), type = "response")
aa$.resid0 <- dd$prop.bio - aa$.fitted0
gg3 <- (ggplot(aa, aes(.fitted0, .resid0))
        + geom_line(aes(group = Pacients), colour = "gray")
        + geom_point(aes(colour = Side, shape = Product))
        + geom_smooth()
)

(Note that if you run this code, you'll get lots of warnings from geom_smooth(), which is unhappy about being run when the predictor variable [i.e., the fitted value] has only two unique levels.) Now the mean value of the residuals is (almost?) exactly zero for both levels (Product=="No" and Product=="Yes"). As long as we're at it, let's check the diagnostics for the random effects:

lme4:::dotplot.ranef.mer(ranef(m1.f)$cond)

This looks OK: no sign of discontinuous jumps (indicating possible multi-modality in the random effects) or outlier patients. Other comments: I disapprove on general principles of reducing the model based on which terms seem to be important (e.g. dropping Side from the model after running anova()): in general, data-driven model reduction messes up inference.
38,296
Checking a beta regression model via glmmTMB with DHARMa package
Have a look at the section about glmmTMB in the vignette of DHARMa; it seems to be an issue with how predictions are calculated given the random effects. As an alternative, you may try the GLMMadaptive package; you can find examples of using it with DHARMa here.
38,297
Checking a beta regression model via glmmTMB with DHARMa package
I am the developer of DHARMa. Dimitris and Ben are correct: the pattern originates from the known issue that glmmTMB does not (yet) allow making predictions based on fixed effects only, which sometimes produces this pattern. I hope we can fix this issue with the next release of glmmTMB, which should allow fixed-effect predictions. [EDIT Nov 21: this problem was fixed in glmmTMB approximately 1yr ago.] In your case, it is obvious that the predictions in your plot are based on fixed and random effects together: your fixed effects contain only one categorical predictor, so fixed-effect-only predictions would give just 2 distinct values on the x axis. We can produce a plot using only the fixed effects as predictors easily by hand:

plotResiduals(data$Product, res$scaledResiduals)

which results in a plot that looks fine. Btw, I agree with Ben that I would not do model selection based on significance; this is essentially p-hacking. If you start with Product*Side, report this model, unless you think there is a serious issue with the inference.
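For completeness, with a glmmTMB version that includes the fix, population-level (fixed-effects-only) predictions can be requested directly. A sketch, assuming the model object m1.f and data dd from the earlier answer, and that your installed glmmTMB supports the re.form argument of predict():

pred_pop <- predict(m1.f, re.form = NA, type = "response")   # fixed effects only
res_pop <- dd$prop.bio - pred_pop                            # population-level residuals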
38,298
Should all adjustments be random effects in a mixed linear effect?
That is not right. A mixed effects model is a mixture of random effects and fixed effects. Generally, the point of adjusting for a random effect is to control for clustering indicators, or combinations of covariates, that are so high-dimensional that the corresponding fixed effects would be unstable, if not singular, in a model. Random effects are a kind of last resort in that sense. Correlated data and model misspecification are highly related: random effects allow you to have a misspecified model, but to borrow information across groups of individuals who tend to be clustered, so as to yield residuals that are conditionally independent. If you managed to control for all those attributes in the fixed effects, there would be no need for a random effect at all. If anything, the preference should be to control for the fixed effect whenever possible, because the inference is more generalizable. Take, as an example, a study of fraternal twins. If you studied the phenotype of a heritable disease and then adjusted for the genetic mutation (SNP) which predisposes individuals to that disease, the data would now be independent despite the design, because the only "relatedness" the twins exhibited has been controlled for. There would be no need for a random effect indicating twin-pair in the outcome.
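A tiny sketch of the contrast in model syntax (hypothetical lme4/lm formulas; phenotype, treatment, snp, twin_pair, and the data frame d are invented names for the twin example):

library(lme4)
m_re <- lmer(phenotype ~ treatment + (1 | twin_pair), data = d)   # random effect absorbs twin relatedness
m_fe <- lm(phenotype ~ treatment + snp, data = d)                 # relatedness controlled for via the SNP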
38,299
Should all adjustments be random effects in a mixed linear effect?
The first and foremost consideration that should drive the specification of random effects in a mixed effects model is the study design. Here are some examples that illustrate how the design affects the model specification. Example 1: if you have a study in which you randomly selected patients from a target population of patients and measured an outcome variable (e.g., CD4 cell count) at several time points, along with time-varying and/or time-invariant predictor variables, then you would want to include at a minimum a random patient effect (i.e., a random patient intercept) to account for the natural nesting of repeated outcome observations within a patient. Example 2: if you have a study in which you randomly selected a set of hospitals from a target population of hospitals, and then you randomly selected a set of patients from each hospital such that each patient would provide multiple measurements of an outcome variable (e.g., CD4 count), then you would need to include (at a minimum) a random hospital effect and a random patient effect in your model. In the first example, patient is a random grouping factor. In the second example, hospital and patient are random grouping factors, with patient nested within hospital (since the patients randomly selected within a hospital are unique to that hospital and will not appear in any other hospital). In some study designs, it is possible to have either fully crossed or partially crossed random grouping factors. For instance, you could have a study where some patients end up attending multiple hospitals throughout the duration of the study, in which case patient and hospital would likely be partially crossed random grouping factors. So paying attention to the study design helps identify the random grouping factors, each of which will be allowed its own random set of intercepts in the model - one intercept per level of the random grouping factor. The second consideration in the mixed effects model specification is to think about which predictor variables in the model can have varying (or random) effects across the levels of the grouping factor(s). For Example 1, let's assume we measured the predictor blood pressure for each patient at all the time points where we also measured the outcome variable CD4 cell count - there were 4 time points per patient (say, once a week, for a total of 4 weeks). Let's also assume we measured the predictor gender. The blood pressure values will change from one week to the next for each patient, in tandem with the values of the CD4 cell counts. If we have reason to believe that the association between CD4 counts and blood pressure values will differ from patient to patient, then we can allow the slope of blood pressure in the model to vary randomly across patients - we can achieve this by including a random effect of blood pressure in the model. The gender value will not change from one week to another for a patient, so there is no need to allow for a random effect of gender in our model. In the context of this example, we say that blood pressure is a within-patient (or within-subject) predictor variable, whereas gender is a between-patients (or between-subjects) predictor variable. Only within-patient predictor variables can be allowed to have varying (or random) effects across the levels of the corresponding random grouping factor.
For Example 2, we can have predictor variables that refer to the hospitals included in the study (e.g., type of hospital) and/or predictor variables that refer to the patients within those hospitals (e.g., patient gender, patient blood pressure). The patient-specific predictor variables, for instance, can be within-patient predictors, whose values change from occasion to occasion for the same patient, or between-patient predictors, whose values are invariant to occasion for each patient but change from one patient to another. The within-patient predictors can have varying (or random) effects across patients, etc. So the inclusion of random effects in your model ultimately depends on whether your study design includes any random grouping factors (e.g., patient, hospital) and whether you have predictor variables whose effects can be assumed to vary across the levels of these random grouping factors.
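As a rough illustration of how these two designs translate into model syntax, here is a hypothetical lme4-style sketch (variable and data-frame names are invented for the example):

library(lme4)
# Example 1: random patient intercept plus a random within-patient slope for blood pressure
m1 <- lmer(cd4 ~ blood_pressure + gender + (1 + blood_pressure | patient), data = d)
# Example 2: random intercepts for hospitals and for patients nested within hospitals
m2 <- lmer(cd4 ~ blood_pressure + gender + hospital_type + (1 | hospital/patient), data = d)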
38,300
T-test paradox: can adding a single point very far from the null value change the outcome from significant to nonsignificant?
Perhaps I'm missing the gist of this question, but: if the next sample is really large variance will blow up, making your t-statistic smaller. You can test it with made up data, for example #Test if the average value of the sample c(2, 2.5, 3) is significantly different from zero #> t.test(c(2,2.5,3))$p.value #[1] 0.01307246 #Now add a 9 to the sample #> t.test(c(2,2.5,3,9))$p.value #[1] 0.08627763 Is it possible that p-value will now be statistically insignificant and we can not reject null hypothesis? In other words is there any situation where increase in variance more than offsets change in $\bar{x}$ and thus renders t-test statistically insignificant? I think I answered both questions with the code above (but apparently everyone knew it already), so let's delve into the t-statistic now: $$t={\bar x \over S/\sqrt n}$$ So for the first sample, with size $n_1$: $$t_1={\bar x_1 \over S_1/\sqrt n_1}$$ Now the second one consists of the first one plus another sample, so: $$t_2={\bar x_2 \over S_2/\sqrt n_2}$$ With: $$n_2=n_1+1 \\ \bar x_2 = {n_1\cdot \bar x_1 + x_{n_1+1} \over n_1 + 1} \\ S_2^2 = {n_1-1\over n_1} \cdot S_1^2 + {(x_{n_1+1}-\bar x_1)^2 \over n_1 + 1}={\left(1\over n_1+1\right)}\left( {n_1^2-1\over n_1} \cdot S_1^2 + (x_{n_1+1}-\bar x_1)^2 \right)$$ $$ t_2= {n_1\cdot \bar x_1 + x_{n_1+1} \over \sqrt{{n_1^2-1\over n_1} \cdot S_1^2 + (x_{n_1+1}-\bar x_1)^2 }} $$ EDIT: I actually removed some further steps to avoid implicitly assuming some terms were different from zero. Defining $\delta = x_{n_1+1} - \bar x_1$ $$ t_2= {(n_1+1)\cdot \bar x_1 + \delta \over \sqrt{{n_1^2-1\over n_1} \cdot S_1^2 + \delta^2 }} $$ Assuming $\delta \neq 0$: $$ t_2= {\delta \over |\delta|}\cdot {{(n_1+1)\cdot \bar x_1\over \delta} + 1 \over \sqrt{{n_1^2-1\over n_1} \cdot \left(\frac{S_1}{\delta}\right)^2 + 1 }}=\\ = \text{sign}(\delta)\cdot {{(n_1+1)\cdot \bar x_1\over \delta} + 1 \over \sqrt{{n_1^2-1\over n_1} \cdot \left(\frac{S_1}{\delta}\right)^2 + 1 }} $$ So re-answering In other words is there any situation where increase in variance more than offsets change in $\bar{x}$ and thus renders t-test statistically insignificant? If we make $\delta$ arbitrarily larger than $\bar x_1$ and $S_1$: $$\lim_{\delta\rightarrow\pm\infty} t_2=\text{sign}(\delta)=\pm 1$$ Indeed: #The original sample is random x = rnorm(n = 1000, mean = 1E-1, sd = 2) t.test(x)$st # t #1.544687 t.test(c(x,1E10))$st #t #1 t.test(c(x,-1E10))$st # t #-1 So basically you can always make $t=\pm1$ with a single addition to the sample, and the smallest obtainable p-value under this regime, with the degrees of freedom tending to infinity, becomes: 2*pnorm(1, lower.tail = FALSE) #[1] 0.3173105 We can also visualize this conclusion looking at the following plot: #Our original sample, here a random normal variable x = rnorm(n = 1000, mean = 0, sd = 2) png("ttestparadox.png") plot(0, 0, xlim = c(-10,10), ylim = c(0,1), type = "n", ylab = "p-value", xlab = "Asinh(new_sample)") abline(h = 2*pnorm(1, lower.tail = FALSE), lwd = 2L, col = 2) for(i in seq(-10,10,length.out = 101L)) points(x = i, y = t.test(c(x,sinh(i)))$p., pch = 20L) dev.off() I've picked new samples in a $\sinh$ scale so we get to large values faster. Anyways, we can see that, when the new sample $x_{n_1+1}$ deviates from $H_0$, the t-statistic goes to 1. 
Finally, here is an example using $\alpha = 0.05$ (shown in blue) where we move from a statistically significant result (the original sample's p-value of 0.02014321, shown as the dashed black line) to non-significant results, depending on the magnitude of the single new point:

set.seed(1234) #reproducible
x = rnorm(n = 1000, mean = 0.2, sd = 2)
png("ttestparadox2.png")
plot(0, 0, xlim = c(-10, 10), ylim = c(0, 1), type = "n",
     ylab = "p-value", xlab = "Asinh(new_sample)")
abline(h = 2 * pnorm(1, lower.tail = FALSE), lwd = 2L, col = 2)
abline(h = 0.05, lwd = 2L, col = 4)
abline(h = t.test(x)$p.value, lwd = 1, lty = 2)
for (i in seq(-10, 10, length.out = 101L))
  points(x = i, y = t.test(c(x, sinh(i)))$p.value, pch = 20L)
dev.off()
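The same flip can be read off without the plot. A minimal check, reusing the seeded sample above (the added value 1e5 is just an arbitrary far-away point, not a special threshold):

set.seed(1234) #same sample as in the plot above
x = rnorm(n = 1000, mean = 0.2, sd = 2)
t.test(x)$p.value           #0.02014321, significant at alpha = 0.05
t.test(c(x, 1e5))$p.value   #driven up toward 0.317, no longer significant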