20,801 | Will two distributions with identical 5-number summaries always have the same shape?

No, definitely not. As a simple counterexample, compare the continuous uniform distribution on $[0, 3]$ with the discrete uniform distribution on $\{0, 1, 2, 3\}$.
A related example is the well-known Anscombe's quartet, in which 4 datasets with 6 identical sample properties (though different from the ones you mention) look completely different. See:
http://en.wikipedia.org/wiki/Anscombe%27s_quartet
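The counterexample can be checked numerically: with NumPy's default (linear-interpolation) quantiles, the sample $\{0, 1, 2, 3\}$ and the continuous uniform on $[0, 3]$ (whose quantile function is $F^{-1}(p) = 3p$) give identical 5-number summaries, even though one distribution is discrete and the other continuous.

```python
import numpy as np

probs = [0.0, 0.25, 0.5, 0.75, 1.0]

# 5-number summary of the discrete sample {0, 1, 2, 3}
discrete = np.quantile([0, 1, 2, 3], probs)

# Population quantiles of the continuous uniform on [0, 3]: F^-1(p) = 3p
continuous = np.array([3 * p for p in probs])

print(discrete)    # [0.   0.75 1.5  2.25 3.  ]
print(continuous)  # identical, though the distributions differ completely
```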
20,802 | Surveys: Is 25% of a large user base representative?

Think about surveys in the general population of, say, the US. If we needed 50% of the population to determine the majority opinion we would need a sample of about 160 million, which is truly prohibitive. Even a 1% sample is extreme (about 3.2 million), and is rarely done. An important survey in the US, the General Social Survey, has sample sizes between 1,500 and almost 3,000. So a 25% sample is in itself no problem.
Remember that a survey is not an election or a referendum. For the latter to be legitimate, every eligible person must have the opportunity to have their say. For a survey the purpose is to get a good estimate of the average opinion, and you can get that with a random sample. So the company needs to decide what the purpose of the survey is: is it a way for employees to give their opinion and participate in the company, or is it a way for the managers to get information?
Both sampling designs ensure that 25% of the employees are asked. The latter ensures that smaller departments are represented in the survey. If you care about standard errors then you should take the nested nature of the sampling into account, though I don't suspect that will matter a great deal in this case.
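The point about absolute sample size can be made concrete: the margin of error of an estimated proportion depends on $n$, not on the population size. A quick sketch (worst-case $p = 0.5$, 95% confidence):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for an estimated proportion (worst case p = 0.5)."""
    return z * math.sqrt(p * (1 - p) / n)

# GSS-sized samples already pin the estimate down to a couple of points
print(round(margin_of_error(1_500), 3))  # 0.025, i.e. about +/-2.5%
print(round(margin_of_error(3_000), 3))  # 0.018
```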
20,803 | Surveys: Is 25% of a large user base representative?

By etymology, "survey" (sur- from 'super', as in 'from above', and -vey from 'view') means to get an overview, not the full picture.
So long as the 25% was truly random and not, say, self-selected (opt-in), then it quite meets the definition of the term. If the survey is optional, then the answers will be representative only of those who feel a need to answer. For instance, imagine a restaurant in which one could fill out a feedback card after dining. Even if most diners are happy, most of the feedback will be negative, because the happy customers see little reason to give feedback.
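The restaurant example can be quantified with made-up response rates; the point is that differential response, not the sample fraction, distorts the picture:

```python
# Hypothetical restaurant: 80% of diners are happy, but unhappy diners
# are ten times more likely to leave a feedback card.
p_happy, p_unhappy = 0.80, 0.20
respond_happy, respond_unhappy = 0.05, 0.50

happy_cards = p_happy * respond_happy        # 0.04 of all diners
unhappy_cards = p_unhappy * respond_unhappy  # 0.10 of all diners
share_happy_in_feedback = happy_cards / (happy_cards + unhappy_cards)
print(round(share_happy_in_feedback, 3))  # 0.286: feedback looks mostly negative
```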
20,804 | Surveys: Is 25% of a large user base representative?

Another point of view comes from the theory of experiment design.
Statistical power is the probability of finding an effect if it’s real (source)
Four factors affect power:
Size of the effect
Standard deviation of the characteristic
Bigger sample size
Significance level desired
Based on these elements, you can write a formal mathematical equation that relates power, sample size, effect size, standard deviation, and significance level (source)
Under a set of assumptions, you could characterize your survey as an experiment and tap into the design-of-experiments framework (here there are a couple of examples). There are a number of educated guesses to be made; however, an imperfect model might be better than no model at all.
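The four factors can be tied together in the usual closed form; here is a sketch for a one-sample z-test (the z-quantiles 1.96 and 0.8416 are standard normal-table values for $\alpha = 0.05$ two-sided and 80% power; the effect sizes are illustrative, not from the answer):

```python
import math

z_alpha, z_beta = 1.96, 0.8416  # alpha = 0.05 (two-sided), power = 0.80

def sample_size(effect, sd):
    """One-sample z-test: n = ((z_a + z_b) * sd / effect)^2, rounded up."""
    return math.ceil(((z_alpha + z_beta) * sd / effect) ** 2)

print(sample_size(effect=0.5, sd=1.0))   # 32: a 'medium' standardized effect
print(sample_size(effect=0.25, sd=1.0))  # halving the effect quadruples n
```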
20,805 | Surveys: Is 25% of a large user base representative?

I sense two questions: one about the sample size (25%, why not a majority?) and another about the sampling technique (is it truly random: sample 25% randomly from the entire company, sample 25% randomly in every department, or use some other distribution?).
1) The sample size does not need to be a majority. The required sample size can be anything between 0 and 100% depending on the required accuracy for a given confidence or likelihood ratio.
100% certainty is never obtained (not even with a 50% or larger subset). Achieving such high accuracy is also not the point of sampling and estimating.
See more on sample sizes: https://en.wikipedia.org/wiki/Sample_size_determination
If you understand the law of large numbers you may also get an intuitive idea.
The distribution of the averages of all possible subsets (your sample will be one of them) becomes narrower, and concentrates closer to the mean of the original distribution, as the size of the subset increases. If you select one person there is some reasonable chance that you find an exception, but to find the same exception in the same direction twice is less likely. And so on: the larger the size of the sampled subset, the smaller the chance of an exceptional subset.
Eventually the distribution of the averages of all possible subsets has a variance equal to the variance of the original set divided by $n$, the size of the subset.
Important note! Your estimate will not be dependent on the size of the population from which you sample, but on the distribution of that population.
Consider your department of size 500. The deviation of the averages of random subsets of size 125 will be about 11 times smaller than the original deviation (since $\sqrt{125} \approx 11.2$). Note that the error in the measurement (the deviation of the average of the randomly selected subsets) is independent of the size of the department. It could be 500, 5,000, or 50,000; in all cases the estimate would be unaffected as long as they have the same distribution (a tiny department might have some strange distribution, but that starts to disappear for larger groups).
2) The sampling does not need to be fully random. You can take the demographics into account.
Eventually you would treat each department separately in this sort of analysis and correct for variations among the departments and how you have sampled in these, differently sized, departments.
In this correction there are two important distinctions. One might model the distribution among groups as a random variable or not. If you treat it as a random variable the analysis becomes stronger (taking out some degrees of freedom in the model), but it might be a wrong assumption if the different groups are not exchangeable as random entities with no specific effect (which seems to be your case, as I imagine that the departments have different functions and may have widely different sentiment that is not random in relation to the department).
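The claim that the error depends on the sample size, not the population size, can be checked with a quick simulation (the normal scores with mean 50 and SD 10 are made-up numbers; sampling is with replacement so the infinite-population approximation applies):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 125  # the sampled subset size

# Two "departments" with the same score distribution but very different sizes
small_pop = rng.normal(50, 10, size=500)
large_pop = rng.normal(50, 10, size=50_000)

def sd_of_subset_means(pop, n, reps=20_000):
    # Draw many random subsets of size n and record each subset's average
    means = [rng.choice(pop, size=n).mean() for _ in range(reps)]
    return float(np.std(means))

# Both are close to 10 / sqrt(125) ~ 0.9, regardless of population size
print(sd_of_subset_means(small_pop, n))
print(sd_of_subset_means(large_pop, n))
```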
20,806 | Surveys: Is 25% of a large user base representative?

Your question is about sample size for a finite population. But the first thing you need is the sample size required in an infinite population, which can then be used to calculate the sample size for a finite population.
In a survey of an infinite population, the formula is: $n=(z^2pq)/d^2$
$n$, sample size
$z$, z-score for the desired confidence level, usually 1.96 (95% confidence)
$p$, proportion of the population with a characteristic; if unknown use 0.5
$q=1-p$, proportion of the population without the characteristic
$d$, error level (aka margin of error), usually 3%, but 1% or 5% can be used.
Error level is the most important factor because the lower the level of error, the bigger the sample size required, and vice versa. Therefore, the sample size for an infinite population with 3% error is: $(1.96^2 \times 0.5 \times 0.5)/0.03^2 \approx 1{,}068$. Further, the error level means that results have an error of ±3% in this case. This means that if 48% of people in the survey were male, then the possible range is 48% ± 3%, or 45% to 51%.
The next step is the formula for sample size for a finite population: $m=n / (1+((n-1)/N))$
$m$, sample size for finite population
$n$, sample size for infinite population (1,068 from above)
$N$, finite population size
Using the example of $N=1,000$, the sample size required with 3% error would be $1068 / (1+((1068-1)/1000))=517$, or 51.7% of the population.
If you used 25% of the population, the error level comes out as 5.4%. This error level may be fine based on previous surveys. With surveys there is always a trade off between the level of error you are willing to accept and the costs of doing the survey.
None of this factors in the response rate (if using a simple random sample). To find out how many people need to be contacted, you divide the sample size by the expected response rate. For example, if the previous response rate was 65%, then you would need to send the survey instrument to $517/0.65 \approx 796$ people.
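The two formulas above can be sketched in a few lines, reproducing the worked numbers (1,068; 517; 796):

```python
import math

def infinite_pop_sample_size(d, p=0.5, z=1.96):
    """n = z^2 * p * q / d^2, sample size for an 'infinite' population."""
    return math.ceil(z**2 * p * (1 - p) / d**2)

def finite_pop_sample_size(n, N):
    """m = n / (1 + (n - 1) / N), the finite-population correction."""
    return math.ceil(n / (1 + (n - 1) / N))

n = infinite_pop_sample_size(d=0.03)    # 1068
m = finite_pop_sample_size(n, N=1_000)  # 517
contact = math.ceil(m / 0.65)           # 796, at a 65% response rate
print(n, m, contact)
```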
Things get more complex if you want to divide up the population by department (known as stratification). Basically, you need to treat each department as a separate finite population if you want the data to be accurate to each department, which may not be practical. But you could do a stratified random sample instead of a simple random sample, where 50% of the sample is randomly selected from the department with 50% of the population, and suitable percentages are randomly sampled from other departments. It will mean that your sample size will increase slightly because you need to round all decimal places up (you can't survey 0.1 of a person). However, the results should be examined at the population (company) level and not at the department level because there will not be enough responses from each department to be accurate.
20,807 | Surveys: Is 25% of a large user base representative?

When talking about a valid sample, the underlying notion is usually one of representation. Does the sample "represent" the population adequately? To obtain a representative sample, one needs to make sure that the sample size is adequate (in order to reduce the variance of the estimate), and that the sample contains members belonging to the subsets of the population exhibiting the different types of behaviour under consideration.
First, the proportion of users selected for the survey matters less than the absolute number of users selected. The required sample size will depend on the accuracy or confidence interval required of the answer. You can read this article for further information.
You mention that the company consists of several departments. Is it probable that the departments vary in their responses to the survey? If they do (or if you don't know for sure), it would be a good idea to "stratify" your sample across the departments. In its simplest form, this means picking an equal proportion of people from every department. E.g.: the company size is 1,000, and the sample size chosen is 100. Then you would choose 50 from a department of size 500, 10 from a department of size 100, etc. This avoids under-representation of a particular department in any specific "random" sample.
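The proportional pick described above can be sketched as follows (the department names and sizes are hypothetical, matching the 1,000-person example):

```python
import math

def proportional_allocation(dept_sizes, total_sample):
    """Allocate a sample across departments in proportion to their size."""
    total = sum(dept_sizes.values())
    # Round up so no department loses its share to truncation
    return {dept: math.ceil(total_sample * size / total)
            for dept, size in dept_sizes.items()}

depts = {"A": 500, "B": 300, "C": 100, "D": 100}  # hypothetical sizes
print(proportional_allocation(depts, 100))
# {'A': 50, 'B': 30, 'C': 10, 'D': 10}
```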
You also mention that not everyone may respond to the survey. If you know that roughly half the people will respond, then to get 100 responses you will have to send the survey to 200 people. You will also have to consider the possibility that such responses may be biased: people with a particular response may be more, or less, inclined to answer.
20,808 | Surveys: Is 25% of a large user base representative?

If it is a truly random selection of the entire employee base, how is that a statistically valid sample assuming all those employees responded?
It is a valid sample as long as it is drawn from the population it is meant to describe. That is, if you only sample bosses, nothing can be said about the other employees; that won't happen in the setting that you have described. It may, however, happen due to non-response (more on that below).
If it is random on a per department level e.g. 25% of each department, how is that a valid sample considering one department is over 50% of the total population.
This is no longer a question of sample validity but one of sampling error. Obviously, the most precise estimates would be obtained from a stratified random draw, with strata encompassing at least the department level. In such a setting, you will have a valid sample for each department, but the estimates for small departments will generally be less precise than the estimates for big departments, thanks to the higher absolute sample size of the latter. For the overall organization, the higher sample representation of bigger departments simply reflects the reality of the organization and in no way reduces the validity of the sample.
The survey is not enforced. There can be no guarantee of a 100% response rate from the 25% selected. There is no incentive or punitive means if the survey is or is not filled out.
You won't be able to force anyone to provide a good answer, but implementing a response-reminder plan is a minimum. Plus, you should explain the relevance of the survey to the employees and the impact they can have on the organisation thanks to the survey: e.g., when are the results published? What potential actions will the organisation undertake based on the survey? Why does each answer matter?
Once data are collected, non-response is an issue that should be dealt with. Dealing with it means you should first analyse the non-response behaviour to detect any potential patterns: has no boss responded? Has a given department not responded at all? Then adopt the necessary strategy (post-stratification, reweighting, imputation, etc.).
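A minimal sketch of the reweighting idea mentioned above (all shares and scores here are hypothetical): each group's responses are weighted by its population share divided by its respondent share, so under-responding groups regain their proper influence.

```python
# Hypothetical department shares of the population vs. of the respondents
pop_share = {"A": 0.50, "B": 0.30, "C": 0.20}
resp_share = {"A": 0.60, "B": 0.30, "C": 0.10}  # department C under-responded

weights = {d: pop_share[d] / resp_share[d] for d in pop_share}
# C's answers count double (2.0); A's are down-weighted (0.833...)

# Hypothetical mean satisfaction per department among respondents
dept_mean = {"A": 3.8, "B": 3.5, "C": 2.9}
naive = sum(resp_share[d] * dept_mean[d] for d in dept_mean)
adjusted = sum(resp_share[d] * weights[d] * dept_mean[d] for d in dept_mean)
print(naive, adjusted)  # the adjusted mean restores C's influence
```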
20,809 | Surveys: Is 25% of a large user base representative?

I'm expanding on @ICannotFixThis's answer with an example of how the four factors involved matter:
Size of the effect
Standard deviation of the characteristic
Bigger sample size
Significance level desired
How these factors affect your results will depend on the statistic you are using. For example, if you wanted to estimate the mean of some variable you might use Student's t-test.
Let's assume you want to figure out the average height of your employees with this survey. You don't actually know the standard deviation of the height of all employees at your company (without measuring everyone) but you could do some research and guess at 3 inches (it is roughly the standard deviation of height for males in the US).
If you surveyed only 5 people then 95% of the time the average height you observe in your survey will be within 3.72 inches of the true average height.
Now, how do our factors affect this:
If you need to know the average height very precisely (i.e. the effect size is very small) then you will need a large number of samples. For example, to know the true average height to within about 0.6 inches you would need to survey 100 people.
If the standard deviation is large then the precision you can obtain is going to be limited. If the standard deviation were 6 inches instead of 3 inches and you still had 5 responses, you would only know the true average height to within 7.44 inches instead of 3.72 inches.
Skipping this point since it is the focus of the entire discussion.
If you really need to be sure you have the correct answer then you will need to survey more people. In our example we saw that with 5 responses we could get within 3.72 inches 95% of the time. If we wanted to be sure our answer was in the correct range 99% of the time then our range will be 6.17 inches and not 3.72 inches. | Surveys: Is 25% of a large user base representative? | I'm expanding on @ICannotFixThis 's answer with an example on how the four factors involved matter:
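The ranges quoted above come from a t-based confidence interval for the mean, with margin = t(1-α/2, n-1) · s/√n. A minimal Python sketch (the helper name is illustrative; the two-sided t critical values for df = 4 are hardcoded from standard tables, so it covers only the n = 5 case discussed):

```python
import math

# Two-sided t critical values for df = 4, taken from standard t tables.
T_CRIT_DF4 = {0.95: 2.776, 0.99: 4.604}

def margin_of_error(sd, n, conf=0.95):
    """Half-width of the t confidence interval for the mean (n = 5, df = 4 only here)."""
    return T_CRIT_DF4[conf] * sd / math.sqrt(n)

print(round(margin_of_error(3, 5), 2))        # ~3.72 inches at 95% with sd = 3
print(round(margin_of_error(6, 5), 2))        # ~7.45: doubling the sd doubles the margin
print(round(margin_of_error(3, 5, 0.99), 2))  # ~6.18: demanding 99% confidence widens it
```

These reproduce (up to rounding) the 3.72-, 7.44- and 6.17-inch figures in the answer.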
Size of the effect
Standard deviation of the characteristic
Bigger sample size
Significance level d | Surveys: Is 25% of a large user base representative?
I'm expanding on @ICannotFixThis 's answer with an example on how the four factors involved matter:
Size of the effect
Standard deviation of the characteristic
Bigger sample size
Significance level desired
How these factors affect your results will depend on the statistic you are using. For example, if you wanted to guess at the mean of some variable you might use Student's T Test.
Let's assume you want to figure out the average height of your employees with this survey. You don't actually know the standard deviation of the height of all employees at your company (without measuring everyone) but you could do some research and guess at 3 inches (it is roughly the standard deviation of height for males in the US).
If you surveyed only 5 people then 95% of the time the average height you observe in your survey will be within 3.72 inches of the true average height.
Now, how do our factors affect this:
If you need to know the average height very precisely (e.g. the effect size is very small) then you will need a large # of samples. For example, to know the true average height within 2.66 inches you would need to survey 100 people.
If the standard deviation is large then the precision you can obtain is going to be limited. If the standard deviation were 6 inches instead of 3 inches and you still had 5 responses you would only know within 7.44 inches instead of 3.72 inches the true average height.
Skipping this point since it is the focus of the entire discussion.
If you really need to be sure you have the correct answer then you will need to survey more people. In our example we saw that with 5 responses we could get within 3.72 inches 95% of the time. If we wanted to be sure our answer was in the correct range 99% of the time then our range will be 6.17 inches and not 3.72 inches. | Surveys: Is 25% of a large user base representative?
I'm expanding on @ICannotFixThis 's answer with an example on how the four factors involved matter:
Size of the effect
Standard deviation of the characteristic
Bigger sample size
Significance level d |
20,810 | R package for identifying relationships between variables [closed] | AFAIK, no. To be more precise, I don't know of a single R package that would do part of what is called Exploratory Data Analysis (EDA) for you through a single function call -- I'm thinking of the re-expression and revelation aspects discussed in Hoaglin, Mosteller and Tukey, Understanding Robust and Exploratory Data Analysis. Wiley-Interscience, 1983, in particular.
However, there exist some nifty alternatives in R, especially regarding interactive exploration of data (Look here for interesting discussion: When is interactive data visualization useful to use?). I can think of
iplots, or its successor Acinonyx, for interactive visualization (allowing for brushing, linked plots, and the like) (Some of these functionalities can be found in the latticist package; finally, rgl is great for 3D interactive visualization.)
ggobi for interactive and dynamic displays, including data reduction (Multidimensional scaling) and Projection Pursuit
This is only for interactive data exploration, but I would say this is the essence of EDA. Anyway, the above techniques might help when exploring bivariate or higher-order relationships between numerical variables. For categorical data, the vcd package is a good option (visualization and summary tables). Then, I would say that the vegan and ade4 packages come first for exploring relationships between variables of mixed data types.
Finally, what about data mining in R? (Try this keyword on Rseek) | R package for identifying relationships between variables [closed] | AFAIK, no. To be more precise, I don't know of a single R package that would do part of what is called Exploratory Data Analysis (EDA) for you through a single function call -- I'm thinking of the re- | R package for identifying relationships between variables [closed]
AFAIK, no. To be more precise, I don't know of a single R package that would do part of what is called Exploratory Data Analysis (EDA) for you through a single function call -- I'm thinking of the re-expression and revelation aspects discussed in Hoaglin, Mosteller and Tukey, Understanding Robust and Exploratory Data Analysis. Wiley-Interscience, 1983, in particular.
However, there exist some nifty alternatives in R, especially regarding interactive exploration of data (Look here for interesting discussion: When is interactive data visualization useful to use?). I can think of
iplots, or its successor Acinonyx, for interactive visualization (allowing for brushing, linked plots, and the like) (Some of these functionalities can be found in the latticist package; finally, rgl is great for 3D interactive visualization.)
ggobi for interactive and dynamic displays, including data reduction (Multidimensional scaling) and Projection Pursuit
This is only for interactive data exploration, but I would say this is the essence of EDA. Anyway, the above techniques might help when exploring bivariate or higher-order relationships between numerical variables. For categorical data, the vcd package is a good option (visualization and summary tables). Then, I would say that the vegan and ade4 packages come first for exploring relationships between variables of mixed data types.
Finally, what about data mining in R? (Try this keyword on Rseek) | R package for identifying relationships between variables [closed]
AFAIK, no. To be more precise, I don't know of a single R package that would do part of what is called Exploratory Data Analysis (EDA) for you through a single function call -- I'm thinking of the re- |
20,811 | R package for identifying relationships between variables [closed] | If you just want to get a quick look at how variables in your dataset are correlated, take a look at the pairs() function, or even better, the pairs.panels() function in the psych package. I wrote a little about the pairs function here.
Using the pairs() or psych::pairs.panels() function it's pretty easy to make scatterplot matrices.
library(psych)
pairs.panels(iris[-5], bg=c("blue","red","yellow")[iris$Species], pch=21, lm=TRUE)
If you just want to get a quick look at how variables in your dataset are correlated, take a look at the pairs() function, or even better, the pairs.panels() function in the psych package. I wrote a little about the pairs function here.
Using the pairs() or psych::pairs.panels() function it's pretty easy to make scatterplot matrices.
library(psych)
pairs.panels(iris[-5], bg=c("blue","red","yellow")[iris$Species], pch=21, lm=TRUE)
If you just want to get a quick look at how variables in your dataset are correlated, take a look at the pairs() function, or even better, the pairs.panels() function in the psych package. I wrote a l |
20,812 | R package for identifying relationships between variables [closed] | Check out the scagnostics package and the original research paper. This is very interesting for bivariate relationships. For multivariate relationships, projection pursuit is a very good first step.
In general, though, domain and data expertise will both narrow and improve your methods for quickly investigating relationships. | R package for identifying relationships between variables [closed] | Check out the scagnostics package and the original research paper. This is very interesting for bivariate relationships. For multivariate relationships, projection pursuit is a very good first step. | R package for identifying relationships between variables [closed]
Check out the scagnostics package and the original research paper. This is very interesting for bivariate relationships. For multivariate relationships, projection pursuit is a very good first step.
In general, though, domain and data expertise will both narrow and improve your methods for quickly investigating relationships. | R package for identifying relationships between variables [closed]
Check out the scagnostics package and the original research paper. This is very interesting for bivariate relationships. For multivariate relationships, projection pursuit is a very good first step. |
20,813 | R package for identifying relationships between variables [closed] | The chart.Correlation function in PerformanceAnalytics provides similar functionality to the plot.pairs function @Stephen Turner mentioned, except it smooths with a loess function rather than a linear model, and the significance for the correlations.
library(PerformanceAnalytics)
chart.Correlation(iris[-5], bg=c("blue","red","yellow")[iris$Species], pch=21) | R package for identifying relationships between variables [closed] | The chart.Correlation function in PerformanceAnalytics provides similar functionality to the plot.pairs function @Stephen Turner mentioned, except it smooths with a loess function rather than a linear | R package for identifying relationships between variables [closed]
The chart.Correlation function in PerformanceAnalytics provides similar functionality to the plot.pairs function @Stephen Turner mentioned, except it smooths with a loess function rather than a linear model, and displays the significance of the correlations.
library(PerformanceAnalytics)
chart.Correlation(iris[-5], bg=c("blue","red","yellow")[iris$Species], pch=21) | R package for identifying relationships between variables [closed]
The chart.Correlation function in PerformanceAnalytics provides similar functionality to the plot.pairs function @Stephen Turner mentioned, except it smooths with a loess function rather than a linear |
20,814 | R package for identifying relationships between variables [closed] | If you are looking for possible transformations to work with correlation, then a tool that has not been mentioned yet that may be useful is ace which can be found in the acepack package (and probably other packages as well). This does an interative process of trying many different transformations (using smoothers) to find the transformations to maximize the correlation between a set of x variables and a y variable. Plotting the transformations can then suggest meaningful transformations. | R package for identifying relationships between variables [closed] | If you are looking for possible transformations to work with correlation, then a tool that has not been mentioned yet that may be useful is ace which can be found in the acepack package (and probably | R package for identifying relationships between variables [closed]
If you are looking for possible transformations to work with correlation, then a tool that has not been mentioned yet that may be useful is ace which can be found in the acepack package (and probably other packages as well). This does an iterative process of trying many different transformations (using smoothers) to find the transformations that maximize the correlation between a set of x variables and a y variable. Plotting the transformations can then suggest meaningful transformations.
If you are looking for possible transformations to work with correlation, then a tool that has not been mentioned yet that may be useful is ace which can be found in the acepack package (and probably |
20,815 | R package for identifying relationships between variables [closed] | You can use the DCOR function in the 'energy' package to compute a measure of non-linear dependency called distance correlation and plot as above. The issue with Pearson's correlation is that it can only detect linear-relationships between variables. Make sure you choose the write parameter for index in the DCOR function that said. | R package for identifying relationships between variables [closed] | You can use the DCOR function in the 'energy' package to compute a measure of non-linear dependency called distance correlation and plot as above. The issue with Pearson's correlation is that it can o | R package for identifying relationships between variables [closed]
You can use the DCOR function in the 'energy' package to compute a measure of non-linear dependency called distance correlation and plot as above. The issue with Pearson's correlation is that it can only detect linear relationships between variables. That said, make sure you choose the right value for the index parameter in the DCOR function.
You can use the DCOR function in the 'energy' package to compute a measure of non-linear dependency called distance correlation and plot as above. The issue with Pearson's correlation is that it can o |
20,816 | Does mean centering reduce covariance? | If $X$ and $Y$ are random variables and $a$ and $b$ are constants, then
$$
\begin{aligned}
\operatorname{Cov}(X + a, Y + b)
&= E[(X + a - E[X + a])(Y + b - E[Y + b])] \\
&= E[(X + a - E[X] - E[a])(Y + b - E[Y] - E[b])] \\
&= E[(X + a - E[X] - a)(Y + b - E[Y] - b)] \\
&= E[(X - E[X])(Y - E[Y])] \\
&= \operatorname{Cov}(X, Y).
\end{aligned}
$$
Centering is the special case $a = -E[X]$ and $b = -E[Y]$, so centering does not affect covariance.
Also, since correlation is defined as
$$
\operatorname{Corr}(X, Y)
= \frac{\operatorname{Cov}(X, Y)}{\sqrt{\operatorname{Var}(X) \operatorname{Var}(Y)}},
$$
we can see that
$$
\begin{aligned}
\operatorname{Corr}(X + a, Y + b)
&= \frac{\operatorname{Cov}(X + a, Y + b)}{\sqrt{\operatorname{Var}(X + a) \operatorname{Var}(Y + b)}} \\
&= \frac{\operatorname{Cov}(X, Y)}{\sqrt{\operatorname{Var}(X) \operatorname{Var}(Y)}},
\end{aligned}
$$
so in particular, correlation isn't affected by centering either.
That was the population version of the story. The sample version is the same: If we use
$$
\widehat{\operatorname{Cov}}(X, Y)
= \frac{1}{n} \sum_{i=1}^n \left(X_i - \frac{1}{n}\sum_{j=1}^n X_j\right)\left(Y_i - \frac{1}{n}\sum_{j=1}^n Y_j\right)
$$
as our estimate of covariance between $X$ and $Y$ from a paired sample $(X_1,Y_1), \ldots, (X_n,Y_n)$, then
$$
\begin{aligned}
\widehat{\operatorname{Cov}}(X + a, Y + b)
&= \frac{1}{n} \sum_{i=1}^n \left(X_i + a - \frac{1}{n}\sum_{j=1}^n (X_j + a)\right)\left(Y_i + b - \frac{1}{n}\sum_{j=1}^n (Y_j + b)\right) \\
&= \frac{1}{n} \sum_{i=1}^n \left(X_i + a - \frac{1}{n}\sum_{j=1}^n X_j - \frac{n}{n} a\right)\left(Y_i + b - \frac{1}{n}\sum_{j=1}^n Y_j - \frac{n}{n} b\right) \\
&= \frac{1}{n} \sum_{i=1}^n \left(X_i - \frac{1}{n}\sum_{j=1}^n X_j\right)\left(Y_i - \frac{1}{n}\sum_{j=1}^n Y_j\right) \\
&= \widehat{\operatorname{Cov}}(X, Y)
\end{aligned}
$$
for any $a$ and $b$. | Does mean centering reduce covariance? | If $X$ and $Y$ are random variables and $a$ and $b$ are constants, then
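The sample-version identity can also be checked numerically. A small pure-Python sketch with a made-up paired sample (the helper name and data are illustrative only), using the same 1/n covariance convention as the derivation above:

```python
def sample_cov(xs, ys):
    """Sample covariance with the 1/n convention used above."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

# Hypothetical paired sample.
x = [1.0, 2.0, 4.0, 7.0]
y = [2.0, 1.0, 5.0, 9.0]

# Centering is the shift a = -mean(x), b = -mean(y); covariance is unchanged.
a, b = -sum(x) / len(x), -sum(y) / len(y)
assert abs(sample_cov(x, y) - sample_cov([xi + a for xi in x], [yi + b for yi in y])) < 1e-12

# Any other constant shift leaves it unchanged as well.
assert abs(sample_cov(x, y) - sample_cov([xi + 10 for xi in x], [yi - 3 for yi in y])) < 1e-12
```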
$$
\begin{aligned}
\operatorname{Cov}(X + a, Y + b)
&= E[(X + a - E[X + a])(Y + b - E[Y + b])] \\
&= E[(X + a - E[X] - E[a])(Y + | Does mean centering reduce covariance?
If $X$ and $Y$ are random variables and $a$ and $b$ are constants, then
$$
\begin{aligned}
\operatorname{Cov}(X + a, Y + b)
&= E[(X + a - E[X + a])(Y + b - E[Y + b])] \\
&= E[(X + a - E[X] - E[a])(Y + b - E[Y] - E[b])] \\
&= E[(X + a - E[X] - a)(Y + b - E[Y] - b)] \\
&= E[(X - E[X])(Y - E[Y])] \\
&= \operatorname{Cov}(X, Y).
\end{aligned}
$$
Centering is the special case $a = -E[X]$ and $b = -E[Y]$, so centering does not affect covariance.
Also, since correlation is defined as
$$
\operatorname{Corr}(X, Y)
= \frac{\operatorname{Cov}(X, Y)}{\sqrt{\operatorname{Var}(X) \operatorname{Var}(Y)}},
$$
we can see that
$$
\begin{aligned}
\operatorname{Corr}(X + a, Y + b)
&= \frac{\operatorname{Cov}(X + a, Y + b)}{\sqrt{\operatorname{Var}(X + a) \operatorname{Var}(Y + b)}} \\
&= \frac{\operatorname{Cov}(X, Y)}{\sqrt{\operatorname{Var}(X) \operatorname{Var}(Y)}},
\end{aligned}
$$
so in particular, correlation isn't affected by centering either.
That was the population version of the story. The sample version is the same: If we use
$$
\widehat{\operatorname{Cov}}(X, Y)
= \frac{1}{n} \sum_{i=1}^n \left(X_i - \frac{1}{n}\sum_{j=1}^n X_j\right)\left(Y_i - \frac{1}{n}\sum_{j=1}^n Y_j\right)
$$
as our estimate of covariance between $X$ and $Y$ from a paired sample $(X_1,Y_1), \ldots, (X_n,Y_n)$, then
$$
\begin{aligned}
\widehat{\operatorname{Cov}}(X + a, Y + b)
&= \frac{1}{n} \sum_{i=1}^n \left(X_i + a - \frac{1}{n}\sum_{j=1}^n (X_j + a)\right)\left(Y_i + b - \frac{1}{n}\sum_{j=1}^n (Y_j + b)\right) \\
&= \frac{1}{n} \sum_{i=1}^n \left(X_i + a - \frac{1}{n}\sum_{j=1}^n X_j - \frac{n}{n} a\right)\left(Y_i + b - \frac{1}{n}\sum_{j=1}^n Y_j - \frac{n}{n} b\right) \\
&= \frac{1}{n} \sum_{i=1}^n \left(X_i - \frac{1}{n}\sum_{j=1}^n X_j\right)\left(Y_i - \frac{1}{n}\sum_{j=1}^n Y_j\right) \\
&= \widehat{\operatorname{Cov}}(X, Y)
\end{aligned}
$$
for any $a$ and $b$. | Does mean centering reduce covariance?
If $X$ and $Y$ are random variables and $a$ and $b$ are constants, then
$$
\begin{aligned}
\operatorname{Cov}(X + a, Y + b)
&= E[(X + a - E[X + a])(Y + b - E[Y + b])] \\
&= E[(X + a - E[X] - E[a])(Y + |
20,817 | Does mean centering reduce covariance? | The definition of the covariance of $X$ and $Y$ is $E[(X-E[X])(Y-E[Y])]$. The expression $X-E[X]$ in that formula is the centered version of $X$. So we already center $X$ when we take the covariance, and centering is an idempotent operator; once a variable is centered, applying the centering process further times doesn't change it. If the formula didn't take the centered versions of the variables, then there would be all sorts of weird effects, such as the covariance between temperature and another variable being different depending on whether we measure temperature in Celsius or Kelvin. | Does mean centering reduce covariance? | The definition of the covariance of $X$ and $Y$ is $E[(X-E[X])(Y-E[Y])]$. The expression $X-E[X]$ in that formula is the centered version of $X$. So we already center $X$ when we take the covariance, | Does mean centering reduce covariance?
The definition of the covariance of $X$ and $Y$ is $E[(X-E[X])(Y-E[Y])]$. The expression $X-E[X]$ in that formula is the centered version of $X$. So we already center $X$ when we take the covariance, and centering is an idempotent operator; once a variable is centered, applying the centering process further times doesn't change it. If the formula didn't take the centered versions of the variables, then there would be all sorts of weird effects, such as the covariance between temperature and another variable being different depending on whether we measure temperature in Celsius or Kelvin. | Does mean centering reduce covariance?
The definition of the covariance of $X$ and $Y$ is $E[(X-E[X])(Y-E[Y])]$. The expression $X-E[X]$ in that formula is the centered version of $X$. So we already center $X$ when we take the covariance, |
20,818 | Does mean centering reduce covariance? | "somewhere" tends to be a rather unreliable source...
Covariance/correlation are defined with explicit centering. If you don't center the data, then you are not computing covariance/correlation. (Precisely: Pearson correlation)
The main difference is whether you center based on a theoretical model (e.g., the expected value is supposed to be exactly 0) or based on the data (arithmetic mean). It is easy to see that the arithmetic mean will yield smaller Covariance than any different center.
However, smaller covariance does not imply smaller correlation, or the opposite. Assume that we have data X=(1,2) and Y=(2,1). It is easy to see that with arithmetic mean centering this will yield perfectly negative correlation, while if we know the generating process produces 0 on average, the data is actually positively correlated.
So in this example, we are centering - but with the theoretical expected value of 0.
This can arise easily. Consider we have a sensor array, 11x11, with the cells numbered -5 to +5. Rather than taking the arithmetic mean, it does make sense to use the "physical" mean of our sensor array here when looking for the correlation of sensor events (if we enumerated the cells 0 to 10, we'd use 5 as fixed mean, and we would get the exact same results, so that indexing choice disappears from the analysis - nice). | Does mean centering reduce covariance? | "somewhere" tends to be a rather unreliable source...
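The X=(1,2), Y=(2,1) example can be made concrete with a few lines of Python (the helper name is illustrative): the sign of the covariance-style cross moment flips depending on whether we center at the arithmetic mean or at the assumed expectation 0.

```python
def cross_moment(xs, ys, cx, cy):
    """Covariance-style cross moment about a chosen center (cx, cy)."""
    return sum((x - cx) * (y - cy) for x, y in zip(xs, ys)) / len(xs)

x, y = [1, 2], [2, 1]

print(cross_moment(x, y, 1.5, 1.5))  # -0.25: centered at the arithmetic mean -> negative
print(cross_moment(x, y, 0.0, 0.0))  #  2.0 : centered at the assumed expectation 0 -> positive
```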
Covariance/correlation are defined with explicit centering. If you don't center the data, then you are not computing covariance/correlation. (Prec | Does mean centering reduce covariance?
"somewhere" tends to be a rather unreliable source...
Covariance/correlation are defined with explicit centering. If you don't center the data, then you are not computing covariance/correlation. (Precisely: Pearson correlation)
The main difference is whether you center based on a theoretical model (e.g., the expected value is supposed to be exactly 0) or based on the data (arithmetic mean). It is easy to see that the arithmetic mean will yield smaller Covariance than any different center.
However, smaller covariance does not imply smaller correlation, or the opposite. Assume that we have data X=(1,2) and Y=(2,1). It is easy to see that with arithmetic mean centering this will yield perfectly negative correlation, while if we know the generating process produces 0 on average, the data is actually positively correlated.
So in this example, we are centering - but with the theoretical expected value of 0.
This can arise easily. Consider we have a sensor array, 11x11, with the cells numbered -5 to +5. Rather than taking the arithmetic mean, it does make sense to use the "physical" mean of our sensor array here when looking for the correlation of sensor events (if we enumerated the cells 0 to 10, we'd use 5 as fixed mean, and we would get the exact same results, so that indexing choice disappears from the analysis - nice). | Does mean centering reduce covariance?
"somewhere" tends to be a rather unreliable source...
Covariance/correlation are defined with explicit centering. If you don't center the data, then you are not computing covariance/correlation. (Prec |
20,819 | What if both null hypothesis and alternative hypothesis are wrong? [duplicate] | What you've identified is one of the fundamental flaws with this approach to hypothesis testing: namely, that the statistical tests you are doing do not assess the validity of the statement you are actually interested in assessing the truth of.
In this form of hypothesis testing, $H_a$ is never accepted; you can only ever reject $H_0$. This is widely misunderstood and misrepresented by users of statistical testing.
What you've identified is one of the fundamental flaws with this approach to hypothesis testing: namely, that the statistical tests you are doing do not assess the validity of the statement you are actually interested in assessing the truth of.
In this form of hypothesis testing, $H_a$ is never accepted; you can only ever reject $H_0$. This is widely misunderstood and misrepresented by users of statistical testing.
What you've identified is one of the fundamental flaws with this approach to hypothesis testing: namely, that the statistical tests you are doing do not assess the validity of the statement you are ac |
20,820 | What if both null hypothesis and alternative hypothesis are wrong? [duplicate] | $H_{a}$ is, properly the complement of $H_{0}$ in the sample space of the distribution under the null hypothesis. One-sided tests, should therefore properly have $H_{0}: \mu \ge c$ (for some number $c$), with $H_{a}: \mu < c$ (or vice versa: $H_{0}: \mu \le c$, with $H_{a}: \mu > c$), for precisely the reason you allude to: if the null hypothesis in a one-sided test is specified as $H_{0}: \mu = 0$, then a one-sided alternative hypothesis cannot express the complement of $H_{0}$. I (and others) therefore disagree with those who use the confusing nomenclature you describe.
See my answer here for a similar question and issue. | What if both null hypothesis and alternative hypothesis are wrong? [duplicate] | $H_{a}$ is, properly the complement of $H_{0}$ in the sample space of the distribution under the null hypothesis. One-sided tests, should therefore properly have $H_{0}: \mu \ge c$ (for some number $c | What if both null hypothesis and alternative hypothesis are wrong? [duplicate]
$H_{a}$ is, properly the complement of $H_{0}$ in the sample space of the distribution under the null hypothesis. One-sided tests, should therefore properly have $H_{0}: \mu \ge c$ (for some number $c$), with $H_{a}: \mu < c$ (or vice versa: $H_{0}: \mu \le c$, with $H_{a}: \mu > c$), for precisely the reason you allude to: if the null hypothesis in a one-sided test is specified as $H_{0}: \mu = 0$, then a one-sided alternative hypothesis cannot express the complement of $H_{0}$. I (and others) therefore disagree with those who use the confusing nomenclature you describe.
See my answer here for a similar question and issue. | What if both null hypothesis and alternative hypothesis are wrong? [duplicate]
$H_{a}$ is, properly the complement of $H_{0}$ in the sample space of the distribution under the null hypothesis. One-sided tests, should therefore properly have $H_{0}: \mu \ge c$ (for some number $c |
20,821 | What if both null hypothesis and alternative hypothesis are wrong? [duplicate] | Put properly, we don't actually test if an alternative hypothesis is true. It is often described that way, but as far as basic statistics goes, that is incorrect.
We actually test whether there is, or is not, enough evidence to accept some "new"/"novel"/"not-default" hypothesis H. We do this by
Taking into account what we know (Bayesian style if appropriate);
Choosing a test we think is applicable to the data and hypothesis we are probing, and
Stipulating a point which will be deemed "significant".
The significance level
This last item, the "significance level", is often a source of confusion. What we actually say is, "If the hypothesis is wrong, then how exceptional would our results be?" So, suppose we set a significance level of 0.1% (P=0.001), what we are saying is:
"If our hypothesis is wrong, we just got a 1 in 1000 result by pure chance. That's so unlikely that we conclude the hypothesis is probably correct."
So you can "draw the line" where you like - for some research such as particle physics, you'd want 2 separate (independent) experiments both with a significance level of 1 in some millions, before concluding the hypothesis is probably correct. For a rigged dice game, a 1 in 3 level might be enough to persuade you not to play that game :)
But either way it is crucial to pick the level beforehand; otherwise you're probably just making a self-serving statement using whatever level you like.
We actually test whether there is, o | What if both null hypothesis and alternative hypothesis are wrong? [duplicate]
Put properly, we don't actually test if an alternative hypothesis is true. It is often described that way, but as far as basic statistics goes, that is incorrect.
We actually test whether there is, or is not, enough evidence to accept some "new"/"novel"/"not-default" hypothesis H. We do this by
Taking into account what we know (Bayesian style if appropriate);
Choosing a test we think is applicable to the data and hypothesis we are probing, and
Stipulating a point which will be deemed "significant".
The significance level
This last item, the "significance level", is often a source of confusion. What we actually say is, "If the hypothesis is wrong, then how exceptional would our results be?" So, suppose we set a significance level of 0.1% (P=0.001), what we are saying is:
"If our hypothesis is wrong, we just got a 1 in 1000 result by pure chance. That's so unlikely that we conclude the hypothesis is probably correct."
So you can "draw the line" where you like - for some research such as particle physics, you'd want 2 separate (independent) experiments both with a significance level of 1 in some millions, before concluding the hypothesis is probably correct. For a rigged dice game, a 1 in 3 level might be enough to persuade you not to play that game :)
But either way it is crucial to pick the level beforehand; otherwise you're probably just making a self-serving statement using whatever level you like.
Put properly, we don't actually test if an alternative hypothesis is true. It is often described that way, but as far as basic statistics goes, that is incorrect.
We actually test whether there is, o |
20,822 | What if both null hypothesis and alternative hypothesis are wrong? [duplicate] | This points to one of the few serious problems with the conventional statistics through null hypothesis significance testing (NHST). A much more meaningful approach in this case is to totally abandon NHST, and adopt the Bayesian framework. If you have some prior information available, just incorporate it into your model through prior distribution. Unfortunately most statistics consumers are simply too indoctrinated, obsessed and entrenched with the old school of thinking. See more discussion here. | What if both null hypothesis and alternative hypothesis are wrong? [duplicate] | This points to one of the few serious problems with the conventional statistics through null hypothesis significance testing (NHST). A much more meaningful approach in this case is to totally abandon | What if both null hypothesis and alternative hypothesis are wrong? [duplicate]
This points to one of the few serious problems with the conventional statistics through null hypothesis significance testing (NHST). A much more meaningful approach in this case is to totally abandon NHST, and adopt the Bayesian framework. If you have some prior information available, just incorporate it into your model through prior distribution. Unfortunately most statistics consumers are simply too indoctrinated, obsessed and entrenched with the old school of thinking. See more discussion here. | What if both null hypothesis and alternative hypothesis are wrong? [duplicate]
This points to one of the few serious problems with the conventional statistics through null hypothesis significance testing (NHST). A much more meaningful approach in this case is to totally abandon |
20,823 | Interpreting result of k-means clustering in R | If you compute the sum of squared distances of each data point to the global sample mean, you get total_SS. If, instead of computing a global sample mean (or 'centroid'), you compute one per group (here, there are three groups) and then compute the sum of squared distances of these three means to the global mean, you get between_SS. (When computing this, you multiply the squared distance of each mean to the global mean by the number of data points it represents.)
If there were no discernible pattern of clustering, the three group means would all lie close to the global mean, and between_SS would be a very small fraction of total_SS. The opposite is true here, which shows that the data points cluster quite neatly in four-dimensional space according to species.
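As a numerical sanity check (not part of the original answer; Python and synthetic data standing in for iris), the decomposition total_SS = between_SS + within_SS described above can be verified directly:

```python
import numpy as np

rng = np.random.default_rng(0)
# Three well-separated 2-D clusters standing in for the iris species.
groups = [rng.normal(loc=c, scale=0.3, size=(50, 2))
          for c in ([0, 0], [3, 0], [0, 3])]
X = np.vstack(groups)

global_mean = X.mean(axis=0)
total_SS = ((X - global_mean) ** 2).sum()

# Weight each group mean's squared distance by its group size.
between_SS = sum(len(g) * ((g.mean(axis=0) - global_mean) ** 2).sum()
                 for g in groups)
within_SS = sum(((g - g.mean(axis=0)) ** 2).sum() for g in groups)

assert np.isclose(total_SS, between_SS + within_SS)  # exact decomposition
print(between_SS / total_SS)   # close to 1 for well-separated groups
```

A between_SS/total_SS ratio near 1 is what the iris k-means output reports when the clusters line up with the species.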
20,824 | Interpreting result of k-means clustering in R | K-means is not a distance-based clustering algorithm.
K-means searches for the minimum sum of squares assignment, i.e. it minimizes unnormalized variance (=total_SS) by assigning points to cluster centers.
In order for k-means to converge, you need two conditions:
reassigning points reduces the sum of squares
recomputing the mean reduces the sum of squares
As there is only a finite number of combinations, you cannot reduce this value indefinitely, and the algorithm must converge at some point to a local optimum.
Whenever you change the assignment function, you risk making the algorithm not terminate anymore, like a dog chasing its own tail. Essentially, both steps have to agree on the objective function. We do know that the arithmetic mean is the optimal choice with respect to the sum of squares. For the first step, we can compute $(x_i-\mu_j)^2$ for each point $x_i$ and each mean $\mu_j$, and assign the point to whichever mean minimizes this. Technically, there is no distance computation here. Mathematically, assigning by least sum of squares is equivalent to assigning by closest squared Euclidean distance, which (if you waste the CPU cycles on computing sqrt) equals minimal Euclidean-distance assignment. So the intuition of assigning each point to the closest mean is correct, even though that is not literally what the optimization problem does.
between_SS probably is the weighted sum of squared distances between the cluster means and the global mean, measuring how well the cluster centers are separated (note: cluster centers; it does not compare the actual clusters. Technically, each cluster's Voronoi cell touches its neighbors' Voronoi cells).
Note that with k-means you can improve the naive clustering quality by increasing k. The quality measured here is a mathematical value, which may not match the user's requirements. Iris is actually a quite good example of this, where k-means often converges to less-than-satisfactory results, even given the external information that there should be exactly 3 clusters.
If you want a distance-based variation of k-means, look at k-medoids. Here convergence is ensured by replacing the mean with the medoid:
Each object is assigned to the nearest cluster (by an arbitrary distance measure)
The cluster center is updated to the most central object of the cluster, i.e. with the smallest average distance to all others.
In each step, the sum of distances reduces; there is a finite number of combinations, therefore the algorithm must terminate at some local minimum.
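The two convergence conditions can be watched directly. This is an illustrative Python sketch (not from the original answer) running Lloyd-style iterations on synthetic data and checking that the sum of squares never increases:

```python
import numpy as np

rng = np.random.default_rng(1)
# Three synthetic, well-separated 2-D clusters.
X = np.vstack([rng.normal(m, 0.5, size=(40, 2))
               for m in ([0, 0], [4, 0], [0, 4])])

centers = X[rng.choice(len(X), size=3, replace=False)]
history = []
for _ in range(15):
    # Step 1: assign each point to the center with least squared distance.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    assign = d2.argmin(axis=1)
    # Step 2: move each center to the mean of its points (keep it if empty).
    centers = np.array([X[assign == j].mean(axis=0) if np.any(assign == j)
                        else centers[j] for j in range(3)])
    history.append(((X - centers[assign]) ** 2).sum())

# Both steps only ever lower the objective, so the trace is non-increasing.
assert all(a >= b - 1e-9 for a, b in zip(history, history[1:]))
```

Because each step agrees on the same objective (sum of squares), the recorded trace is monotone, which is exactly the convergence argument made above.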
20,825 | Meaning of "Overdispersion" in Statistics | In a Poisson$(\lambda)$ distribution:
$$
\mu=\lambda\\
\sigma^2 =\lambda\\
\implies\\
\mu=\sigma^2
$$
Consequently, when we believe we have a Poisson distribution, we expect the samples drawn from it to obey $\bar x \approx s^2$, since $\mu=\sigma^2$ in the suspected distribution.
If we have a gross violation where $s^2 \gg \bar x$, then we would not find it believable that $\mu=\sigma^2$, and we describe the data as overdispersed. That is, the dispersion is higher than we expected it to be.
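The $\bar x \approx s^2$ check is easy to run in practice. This illustrative Python sketch (not part of the original answer) draws Poisson samples and, for contrast, negative-binomial samples with the same mean but larger variance:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 4.0

pois = rng.poisson(lam, size=100_000)
print(pois.mean(), pois.var())   # both close to 4: equidispersion

# A negative binomial with the same mean but larger variance:
# for NB(n, p), mean = n(1-p)/p and var = n(1-p)/p**2.
n, p = 2, 2 / (2 + lam)          # p chosen so that the mean is also 4
nb = rng.negative_binomial(n, p, size=100_000)
print(nb.mean(), nb.var())       # mean near 4, variance near 12
```

For the negative binomial draw, $s^2 \gg \bar x$, which is exactly the overdispersion signature described above.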
20,826 | Meaning of "Overdispersion" in Statistics | For many one-parameter probability distributions, the variance in the distribution is a function of the mean. When you fit data to a statistical model using these distributions, the estimator will tend to give you a reasonable estimate of the mean, but the estimated variance will just be a function of that, so it will not generally fit to the data very well. This happens with certain one-parameter probability distributions, most notably the Poisson distribution. In this case, it is common for the data to be more variable than the estimated variance coming out of your model, in which case we say that there is a problem of "overdispersion".
Both of the descriptions you have given for this are correct. Overdispersion is indeed the presence of greater variability in the data than predicted by the model. This generally occurs because the variance in the distribution used in the model is a function of the mean, so the estimation procedure can't estimate them both well (and mean estimation is generally more important than variance estimation when fitting data to a model).
Roughly speaking, if you have $k$ parameters in a statistical distribution, and you fit it to data, it will allow you to accurately estimate $k$ moments of the distribution (often the first $k$ moments, but not always$^\dagger$). So, for example, some one-parameter distributions allow you to accurately estimate the mean but not the variance, some two-parameter distributions allow you to accurately estimate the mean and variance but not the skewness, some three-parameter distributions allow you to accurately estimate the mean, variance and skewness, but not the kurtosis, and so on.
If you want to avoid overdispersion in your modelling, you should use statistical models that use an underlying two-parameter distribution that can fit the mean and variance (e.g., use a negative binomial model instead of a Poisson model). The same basic principle also applies if you want to accurately fit higher-order moments --- e.g., if you want to accurately fit skewness you might generalise to a three-parameter distribution, and so on.
$^\dagger$ For example, the Student's T-distribution has a single parameter that affects the variance and kurtosis but not the mean or skewness.
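As a concrete instance of moving to a two-parameter family, here is a hypothetical Python sketch (not from the original answer) that recovers negative-binomial parameters from the first two sample moments, i.e. a method-of-moments fit of both mean and variance:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.negative_binomial(3, 0.4, size=50_000)   # overdispersed count data

m, v = x.mean(), x.var()
# Method-of-moments for NB(n, p): m = n(1-p)/p and v = n(1-p)/p**2,
# so p = m/v and n = m**2 / (v - m); this is only solvable when v > m,
# i.e. when the data really are overdispersed.
p_hat = m / v
n_hat = m ** 2 / (v - m)
print(n_hat, p_hat)   # close to the true parameters (3, 0.4)
```

A one-parameter Poisson fit to the same data would pin the variance to the mean and miss the extra spread; the second parameter is what absorbs it.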
20,827 | Meaning of "Overdispersion" in Statistics | The only place I run into over-dispersion issues is when e.g. fitting a GLM model based on Poisson count data (Poisson regression).
As you know, for Poisson, the variance is equal to the mean, so
\begin{equation}
\mathrm{Var}(Y_i) = \mathrm{E}(Y_i).
\end{equation}
But commonly the variance exceeds the mean, so an attempt is made to recover the relationship above by introducing an over-dispersion parameter, $\phi$ in the functional form
\begin{equation}
\mathrm{Var}(Y_i) = \phi\mathrm{E}(Y_i),
\end{equation}
which is fitted along with the parameters during maximization of the log-likelihood. For Poisson, $\phi=1$, and if you allow $\phi>1$, you no longer have a distribution from the exponential family. The end result of estimating $\phi$ is that $V(\beta)$ is deliberately inflated, yielding larger standard errors and therefore more conservative statements about the significance of the parameters. Not taking $\phi$ into account leads to underestimated standard errors and overstated significance.
Another work-around is to use quantile regression, which is based on ranks (non-parametric).
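The dispersion parameter $\phi$ is typically estimated from Pearson residuals. This is an illustrative Python sketch (an intercept-only toy model, not the full GLM machinery, and not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
# Overdispersed counts: a Poisson whose rate is itself gamma-distributed
# (a gamma-Poisson mixture, i.e. negative binomial).
rate = rng.gamma(shape=2.0, scale=2.0, size=5_000)   # mean rate 4
y = rng.poisson(rate)

# Intercept-only Poisson "model": the fitted mean is just ybar.
mu = y.mean()
df_resid = len(y) - 1
# Pearson estimate of the dispersion parameter phi:
# sum of (y - mu)^2 / mu over observations, divided by residual df.
phi = ((y - mu) ** 2 / mu).sum() / df_resid
print(phi)   # well above 1 here, flagging overdispersion
```

A $\hat\phi$ near 1 would be consistent with the Poisson assumption; values well above 1, as here, indicate that standard errors should be scaled up.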
20,828 | Meaning of "Overdispersion" in Statistics | Over-dispersion can occur with one-parameter distributions, where the mean and variance are tied together (Poisson, Binomial, Exponential). In real data, the variance is usually much greater than these distributions allow. Over-dispersion creates over-confidence (e.g. too-narrow CIs), but usually does not introduce biases. In practical modelling, this problem can be resolved in one of three ways:
quasi-likelihood or generalized estimating equations
two-parameter distributions, such as negative-binomial or beta-binomial
observation-level random effects
I am discussing the issue and solutions 2 & 3 in my book.
20,829 | Power paradox: overestimated effect size in low-powered study, but the estimator is unbiased | It seems to me that an underpowered study is, by definition, unlikely to give a small p-value against the null even when a true effect exists. Consequently, if you do get a small p-value, it is likely that you are overestimating the true effect size.
However, if you look at all estimates from repeated experiments, regardless of their significance threshold, you do get an unbiased overall estimate. This should reconcile your paradox? Here's a simulation to illustrate:
We repeat an underpowered experiment 10,000 times:
set.seed(1234)
es <- 0.1 # The true difference
n <- 5 # and a small sample size
est <- rep(NA, 10000)
p <- rep(NA, length(est))
for(i in 1:length(est)) {
a <- rnorm(n, mean=0)
b <- rnorm(n, mean=0 + es)
tt <- t.test(a, b)
est[i] <- diff(tt$estimate)
p[i] <- tt$p.value
}
If you consider only experiments with p < 0.05 you get extreme estimates (blue line is the true value). Note that some estimates are extreme and are also in the wrong direction (those on the left of the blue line):
hist(est[p < 0.05], xlab='Estimates where p < 0.05', main='')
abline(v=es, col='blue', lty='dashed')
Nevertheless, the estimator is unbiased across the 10,000 experiments:
mean(est)
[1] 0.1002
# Count of over- and under-estimating experiments:
length(est[est > es])
[1] 5056
length(est[est <= es])
[1] 4944
20,830 | Power paradox: overestimated effect size in low-powered study, but the estimator is unbiased | To resolve the issue of bias, note that, when we consider the effect size in a test that rejects, we no longer consider the entire distribution of $\hat\theta$ that estimates $\theta$ but $\hat\theta\vert\text{reject }H_0$, and there is no reason to expect this latter distribution to have the unbiasedness that $\hat\theta$ has.
Regarding the issue of being "underpowered", it is true that a formal definition of this term would be nice. Note, however, that as power increases, the estimation bias in estimates corresponding to rejected null hypotheses decreases.
library(pwr)
library(ggplot2)
set.seed(2022)
Ns <- seq(50, 2000, 50)
B <- 10000
powers <- biases <- ratio_biases <- rep(NA, length(Ns))
effect_size <- 0.1
for (i in 1:length(Ns)){
powers[i]<- pwr::pwr.t.test(
n = Ns[i],
d = effect_size,
type = "one.sample"
)$power
observed_sizes_conditional <- rep(NA, B)
for (j in 1:B){
x <- rnorm(Ns[i], effect_size, 1)
pval <- t.test(x)$p.value
if (pval <= 0.05){
observed_sizes_conditional[j] <- mean(x)
}
}
# Keep only the estimates from experiments that rejected the null
observed_sizes_conditional <- observed_sizes_conditional[
!is.na(observed_sizes_conditional)
]
ratio_biases[i] <- mean(observed_sizes_conditional)/effect_size
biases[i] <- mean(observed_sizes_conditional) - effect_size
print(paste(i, "of", length(Ns)))
}
d1 <- data.frame(
Power = powers,
Bias = biases,
Statistic = "Standard Bias"
)
d2 <- data.frame(
Power = powers,
Bias = ratio_biases,
Statistic = "Ratio Bias"
)
d <- rbind(d1, d2)
ggplot(d, aes(x = Power, y = Bias, col = Statistic)) +
geom_line() +
geom_point() +
facet_grid(rows = vars(Statistic), scales = "free_y") +
theme_bw() + theme(legend.position="none")
I do not know the correct term for what I mean by "ratio bias", but I mean $\dfrac{\mathbb E[\hat\theta \mid \text{reject } H_0]}{\theta}$, the conditional mean of the estimate divided by the true effect size. Since the effect size is not zero, this fraction is defined.
This makes sense for the t-test, where the standard error will be larger for a smaller sample size (less power), requiring a larger observed effect to reach significance.
By showing this, we avoid that irritating issue of defining what an "underpowered" study means and just show that more power means less estimation bias. This explains what is happening in the linked question, where a reviewer asked an author for the power of the test in order to screen for gross bias in the conditional estimator $\hat\theta\vert\text{reject }H_0$. If the power is low, the graphs above suggest that the bias will be high, but high power makes the bias nearly vanish, hence the reviewer wanting high power.
20,831 | Power paradox: overestimated effect size in low-powered study, but the estimator is unbiased | Possibly the following image might shed some light
Given that the null hypothesis is true, there will always be an $\alpha\%$ chance to reject the null hypothesis, no matter what the power of a test is*.
But the power of the test changes the overall picture considerably. Possibly the paradox stems from focusing too exclusively on the null hypothesis and the p-value.
*Or actually the percentage to reject might be a bit higher because the hypothesis test is based on a theoretical model for the error and the reality might be different (sampling errors like outliers or correlation between measurements).
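The claim that the rejection rate under a true null is $\alpha$ regardless of power can be checked by simulation. This illustrative Python sketch (not from the original answer; it uses a two-sided z-test with known variance to keep the code dependency-free) compares a tiny and a large sample size:

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_rate(n, n_sims=20_000, z_crit=1.959964):
    # Two-sided z-tests (known sigma = 1) on data generated under the null.
    x = rng.normal(0.0, 1.0, size=(n_sims, n))
    z = np.sqrt(n) * x.mean(axis=1)
    return float((np.abs(z) > z_crit).mean())

r_small, r_large = rejection_rate(5), rejection_rate(200)
print(r_small, r_large)   # both close to 0.05, regardless of sample size
```

The rejection rate is pinned at $\alpha$ by the test's construction; what power changes is only what happens when the null is false.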
20,832 | Power paradox: overestimated effect size in low-powered study, but the estimator is unbiased | It's not a paradox. You may call it a dilemma, or more precisely an unknown. You have correctly narrowed it down to the two possible outcomes: you are either really "lucky", or the assumptions behind the power calculation are incorrect. There is no way to know which is which based on the results of one study alone. These considerations matter even for well-powered studies with statistically significant findings.
20,833 | Power paradox: overestimated effect size in low-powered study, but the estimator is unbiased | You may have an estimator $\hat\theta$ that is (unconditionally) unbiased for its target: $\mathbb{E}(\hat\theta)=\theta$. The absolute value of the estimator $|\hat\theta|$ may also be (unconditionally) unbiased for the absolute value of the target: $\mathbb{E}(|\hat\theta|)=|\theta|$. (The absolute value rather than the raw value is relevant when considering effect size.)
However, once you condition on statistical significance of the estimate, the absolute value of the conditional estimator will generally no longer be unbiased for the absolute value of the target: $\mathbb{E}(|\hat\theta|\mid \hat\theta\text{ is stat. signif. at }\alpha\text{ level})\neq|\theta|$.
(I had struggled with a similar question over here: Understanding Gelman & Carlin "Beyond Power Calculations: ..." (2014). The issue was not really the essence but rather presentation. In the beginning it was not immediately clear to me that Gelman & Carlin were actually conditioning on statistical significance.)
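A small simulation makes the conditioning effect concrete (my sketch, not part of the original answer; the true effect $\theta=0.2$, $n=30$, and $\sigma=1$ are illustrative choices that give low power):

```python
import numpy as np

rng = np.random.default_rng(0)

theta, sigma, n = 0.2, 1.0, 30          # assumed values: small effect, small sample
reps = 200_000
se = sigma / np.sqrt(n)

# Sampling distribution of the estimator (the sample mean), unbiased for theta
theta_hat = rng.normal(theta, se, size=reps)

# Two-sided z-test at the 5% level
signif = np.abs(theta_hat / se) > 1.96

power = signif.mean()
cond_mean = np.abs(theta_hat[signif]).mean()   # |estimate| among significant results

print(f"power = {power:.2f}")
print(f"mean |estimate| given significance = {cond_mean:.2f}  (true effect = {theta})")
```

With these numbers the power is low (around 0.2) and the statistically significant estimates average more than twice the true effect, which is exactly the conditional bias described above.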
20,834 | Power paradox: overestimated effect size in low-powered study, but the estimator is unbiased | You have hit on the same question discussed in the well-known Why Most Published Research Findings Are False paper. If you do a lot of experiments as a scientific community and quite a few of the tested null hypotheses are true (i.e. people try to show a whole lot of effects that aren't really there, while some are), then "underpowered" studies are more likely to produce false positive findings than "well-powered" studies. Similarly, once one conditions on statistical significance, point estimates are biased away from where you put your null hypothesis. This bias is larger, the more underpowered a study is.
You might critique this by saying that null hypotheses are rarely exactly true, but the exact same things happen when you instead look at a set-up where many effects are very small and only a few are big.
People worry about this a lot in drug development, where large companies will run early stage proof of concept studies (you can look at those as a kind of screening tool for deciding which projects to pursue further) for many potentially promising new drugs (of which most will not have a meaningful effect on the disease of interest). It is important for these studies to not be completely underpowered, because otherwise "positive" proof of concept results will become useless as a tool for prioritizing which drugs to study further.
20,835 | What are the differences between "Marginal Probability Distribution" and "Conditional Probability Distribution"? | Let me add an example to @develarist's answer.
\begin{array}{llcc|r}
Y & & y_1 & y_2 \\
\hline
X & x_1 & 0.450 & 0.150 & 0.600 \\
& x_2 & 0.167 & 0.233 & 0.400 \\
\hline
& & 0.617 & 0.383 & 1.000
\end{array}
The table shows the joint distribution of $(X,Y)$:
\begin{array}{l}
P(X=x_1,Y=y_1)=0.450 \\
P(X=x_1,Y=y_2)=0.150 \\
P(X=x_2,Y=y_1)=0.167 \\
P(X=x_2,Y=y_2)=0.233 \\
\end{array}
The marginal distribution of $Y$ is:
\begin{align*}
P(Y=y_1)&=P(Y=y_1 \text{ and } (X=x_1\text{ or }X=x_2))\\
&= P((Y=y_1\text{ and }X=x_1)\text{ or }(Y=y_1\text{ and }X=x_2)) \\
&= \sum_{i=1}^2 P(Y=y_1,X=x_i)=0.450+0.167=0.617 \\
P(Y=y_2)&=0.383
\end{align*}
The marginal distribution of $X$ is:
\begin{align*}
P(X=x_1)&=0.600\\ P(X=x_2)&=0.400
\end{align*}
The conditional distribution of $Y$ given $X=x_1$ is:
\begin{align*}
P(Y=y_1\mid X=x_1)&=\frac{P(Y=y_1,X=x_1)}{P(X=x_1)}\\&=0.450/0.600=0.750\\
P(Y=y_2\mid X=x_1)&=0.150/0.600=0.250
\end{align*}
The conditional distribution of $Y$ given $X=x_2$ is:
\begin{array}{l}
P(Y=y_1\mid X=x_2)=0.167/0.400=0.4175\\
P(Y=y_2\mid X=x_2)=0.233/0.400=0.5825
\end{array}
The conditional distribution of $X$ given $Y=y_1$ is:
\begin{array}{l}
P(X=x_1\mid Y=y_1)=0.450/0.617=0.7293\\
P(X=x_2\mid Y=y_1)=0.167/0.617=0.2707
\end{array}
The conditional distribution of $X$ given $Y=y_2$ is:
\begin{array}{l}
P(X=x_1\mid Y=y_2)=0.150/0.383=0.3916\\
P(X=x_2\mid Y=y_2)=0.233/0.383=0.6084
\end{array}
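The arithmetic in this example can be reproduced mechanically; here is a sketch using NumPy (the array layout, rows for $X$ and columns for $Y$, is my choice, not part of the original answer):

```python
import numpy as np

# Joint distribution from the table: rows are x1, x2; columns are y1, y2
joint = np.array([[0.450, 0.150],
                  [0.167, 0.233]])

p_x = joint.sum(axis=1)              # marginal of X: sum over Y -> [0.6, 0.4]
p_y = joint.sum(axis=0)              # marginal of Y: sum over X -> [0.617, 0.383]

p_y_given_x = joint / p_x[:, None]   # P(Y | X): each row sums to 1
p_x_given_y = joint / p_y[None, :]   # P(X | Y): each column sums to 1

print(p_y_given_x[0])    # P(Y=y1 | X=x1), P(Y=y2 | X=x1) -> 0.75, 0.25
print(p_x_given_y[:, 0]) # P(X=x1 | Y=y1), P(X=x2 | Y=y1)
```

Dividing the joint table by a marginal is exactly the renormalization that turns one row (or column) of the table into a conditional distribution.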
20,836 | What are the differences between "Marginal Probability Distribution" and "Conditional Probability Distribution"? | If $X$ and $Y$ are two random variables, the univariate pdf of $X$ is the marginal distribution of $X$, and the univariate pdf of $Y$ is the marginal distribution of $Y$. Therefore, when you see the word marginal, just think of a single data series' own distribution, itself. Don't be tricked into thinking marginal means something different or special than a univariate (single variable) assessment.
For conditional distribution on the other hand, we make a bivariate (two variables) assessment, but by considering the univariate components' relationship to one another: the conditional pdf is the distribution of $X$ conditional on, or given the recognition of, $Y$'s data. The idea is that an observation in $X$ has some correspondence to a similarly-located observation in $Y$, and therefore we're thinking of $X$ with respect to what is observed in $Y$. In other words, the conditional pdf is a flimsy way of characterizing the distribution of $X$ as a function of $Y$.
20,837 | What are the differences between "Marginal Probability Distribution" and "Conditional Probability Distribution"? | In this question/answer I used the following graph:
Joint distribution: In the left plot you see the joint distribution of disp versus mpg. This is a scatterplot in a 2D-space.
Marginal distribution: You might be interested in the distribution of all the 'mpg' together. That is depicted by the first (big) histogram. It shows the distribution of 'mpg'. (note that in this way of plotting the marginal distribution occurs in the margins of the figure)
Conditional distribution can be seen as slices through the scatter plot. In this case you see the distribution of the variable 'mpg' for three different conditions (emphasized in the histogram and joint distribution with colors yellow, green and blue).
20,838 | What are the differences between "Marginal Probability Distribution" and "Conditional Probability Distribution"? | In general, the joint distribution of two or more variables $P(A, B, C, ...)$ is a statement of what you know, assuming you have no certain information about any of them (i.e., you are uncertain about all of them).
Given a joint distribution, it's often relevant to look at subsets of the variables. The distribution of a subset, ignoring any others, is called a marginal distribution. For example, $P(A)$ is a marginal distribution of $P(A, B, C, ....)$, $P(A, C)$ is also a marginal distribution of $P(A, B, C, ...)$, likewise $P(B), P(B, Z), P(H, W, Y), P(C, E, H, Z)$, etc., are all marginal distributions of $P(A, B, C, ...)$.
By "ignoring", I mean that the omitted variables could take on any values; we don't make any assumption about them. A different way to look at subsets is to make assumptions about the omitted variables. That is, to look at some of the variables assuming we know something about the others. This is called a conditional distribution and it is written with a vertical bar to separate the uncertain variables, on the left, from the assumed variables, on the right.
For example, $P(B, C | A), P(D | A, J, X), P(C, M | O, Q, R, U), P(D, F, G, L | B, E, S)$, etc., are all conditional distributions derived from the joint distribution $P(A, B, C, ...)$. These all represent statements of the form: given that we know the variables on the right, what do we know about the uncertain variables on the left. E.g., $P(B, C | A)$ represents what we know about $B$ and $C$, given that we know $A$. Likewise $P(D | A, J, X)$ represents what we know about $D$, given that we know $A, J$, and $X$.
There may be any number of variables on the left and right in a conditional distribution. $P(C, M | O, Q, R, U)$ represents what we know about $C$ and $M$, given that we know $O, Q, R$, and $U$. $P(D, F, G, L | B, E, S)$ represents what we know about $D, F, G,$ and $L$, given that we know $B, E,$ and $S$.
Joint, marginal, and conditional distributions are related in some important ways. In particular, $P($some variables, other variables$) = P($some variables $|$ other variables$) P($other variables$)$. That is, the joint distribution of some variables and other variables is the product of the conditional distribution of some variables given other variables and the marginal distribution of other variables. Rearranging this product rule yields Bayes' rule.
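The product rule can be checked on any toy joint distribution; this sketch (the numbers are arbitrary illustrations, not from the original answer) verifies it cell by cell for two binary variables:

```python
# Toy joint distribution over two binary variables A and B
p_joint = {(0, 0): 0.1, (0, 1): 0.3, (1, 0): 0.2, (1, 1): 0.4}

def p_b(b):
    """Marginal of B: sum the joint over all values of A."""
    return sum(p for (_, b2), p in p_joint.items() if b2 == b)

def p_a_given_b(a, b):
    """Conditional of A given B, from the definition P(A, B) / P(B)."""
    return p_joint[(a, b)] / p_b(b)

# P(A, B) = P(A | B) * P(B) holds for every cell of the joint table
for (a, b), p in p_joint.items():
    assert abs(p - p_a_given_b(a, b) * p_b(b)) < 1e-12
print("product rule verified")
```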
20,839 | What are the differences between "Marginal Probability Distribution" and "Conditional Probability Distribution"? | Consider a joint discrete probability $p(x_i,y_j)$ over $x_i$'s and $y_j$'s. The marginal probability $p_X(x_i)$ has no dependence on any $Y$ any more since we sum over all $y_j$ as follows: $p_X(x_i) = \sum_j p(x_i, y_j)$. We've reduced the two-dimensional information from $p(x_i,y_j)$ into one dimension $p_X(x_i)$.
The conditional distribution of $X$ conditioned on $Y$ is a distribution of $X$, given a specific value of $Y$, using conditional probability defined as $p(X=x_i \mid Y=y_j)$ and looking at all values of $X$. So for every value of $Y$, we have a different conditional distribution for $X$ conditioned on that value of $Y$.
20,840 | Confused by location of fences in box-whisker plots | The whisker only goes as far as the maximum (minimum) point less (greater) than the upper (lower) fence value. For example, if $q_3+k \times IQR=10$ and the data set had values
$\lbrace\dots,5,6,7,8,12\rbrace$, then the whisker would only go as far as 8, and 12 would be the "outlier".
So, in short, the definitions for the whiskers, $q_3 +k \times IQR$ and $q_1-k\times IQR$, only represent the maximum extent to which the whiskers could go, if there were data points at those values.
Thus they don't have to be (and rarely are) the same length.
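The rule is easy to sketch in code (my illustration, not the answerer's; quartile conventions vary across software, so exact fence values can differ slightly):

```python
import numpy as np

def tukey_whiskers(data, k=1.5):
    """Whisker ends and outliers under the k * IQR fence rule."""
    data = np.asarray(data)
    q1, q3 = np.percentile(data, [25, 75])
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - k * iqr, q3 + k * iqr
    inside = data[(data >= lo_fence) & (data <= hi_fence)]
    outliers = data[(data < lo_fence) | (data > hi_fence)]
    # The whiskers stop at the most extreme points *inside* the fences,
    # not at the fences themselves
    return inside.min(), inside.max(), outliers

lo, hi, out = tukey_whiskers([1, 2, 3, 4, 5, 6, 7, 8, 30])
print(lo, hi, out)   # upper fence is 13, but the whisker stops at 8; 30 is flagged
```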
20,841 | Confused by location of fences in box-whisker plots | Here's a graphical representation that shows the upper and lower fences. In practice, the fences are not drawn. As mentioned in the other answers, the whiskers would only extend to the fence values if there were observations equal to the fence values, otherwise the whiskers extend to the most extreme observations that lie within the fences.
20,842 | Confused by location of fences in box-whisker plots | You seem to be confusing whiskers and fences. Whiskers represent data points, fences do not. Since the data points can lie pretty much anywhere (subject to the distribution they follow...), it is not surprising that the results would be asymmetrical. On the webpage that you linked, there is only one plot that shows true outliers (the one labeled "outliers" approximately in the middle of the page). You can infer the position of the fences from this picture, because the whisker ends inside the fence, and the dots are outside.
20,843 | Confused by location of fences in box-whisker plots | I am going to go straight to the point: let's say your data is positively skewed (for example, some chi-square distribution); there is no outlier on the left side, while you might have a few on the other side.
Moreover, if the data is not distributed as far as 1.5*IQR, your box plot will be shorter than 1.5*IQR on one end.
In this case, a box plot with 1.5*IQR on both sides would misrepresent the data because the range would be larger (at least on the shorter side) than it is!!
20,844 | Why is svm not so good as decision tree on the same data? | Possibilities include the use of an inappropriate kernel (e.g. a linear kernel for a non-linear problem) and poor choice of kernel and regularisation hyper-parameters. Good model selection (choice of kernel and hyper-parameter tuning) is the key to getting good performance from SVMs; they can only be expected to give good results when used correctly.
SVMs often do take a long time to train, this is especially true when the choice of kernel and particularly regularisation parameter means that almost all the data end up as support vectors (the sparsity of SVMs is a handy by-product, nothing more).
Lastly, the no free lunch theorems say that there is no a-priori superiority for any classifier system over the others, so the best classifier for a particular task is itself task-dependent. However, there is more compelling theory for the SVM that suggests it is likely to be a better choice than many other approaches for many problems.
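As a sketch of what such model selection looks like in practice (scikit-learn; the dataset, grid values, and 5-fold cross-validation here are arbitrary illustrative choices, not from the original answer):

```python
from sklearn.datasets import make_moons
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# A non-linear toy problem where a linear kernel would underperform
X, y = make_moons(n_samples=400, noise=0.2, random_state=0)

# Tune C (regularisation) and gamma (RBF kernel width) by cross-validation
pipe = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
grid = GridSearchCV(
    pipe,
    param_grid={"svc__C": [0.1, 1, 10, 100], "svc__gamma": [0.01, 0.1, 1, 10]},
    cv=5,
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

Scaling the features before fitting matters for SVMs, which is why the standardizer is part of the pipeline being cross-validated.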
20,845 | Why is svm not so good as decision tree on the same data? | Decision Trees and Random Forests are actually extremely good classifiers. While SVMs (Support Vector Machines) are seen as more complex, that does not actually mean they will perform better.
The paper "An Empirical Comparison of Supervised Learning Algorithms" by Rich Caruana compared 10 different binary classifiers, SVM, Neural-Networks, KNN, Logistic Regression, Naive Bayes, Random Forests, Decision Trees, Bagged Decision Trees, Boosted Decision trees and Bootstrapped Decision Trees on eleven different data sets and compared the results on 8 different performance metrics.
They found that Boosted Decision Trees came in first, with Random Forests second, then Bagged Decision Trees, and then SVM.
The results will also depend on how many classes you are actually classifying for.
The paper | Why is svm not so good as decision tree on the same data?
Decision Trees and Random Forests are actually extremely good classifiers. While SVM's (Support Vector Machines) are seen as more complex it does not actually mean they will perform better.
The paper "An Empirical Comparison of Supervised Learning Algorithms" by Rich Caruana compared 10 different binary classifiers, SVM, Neural-Networks, KNN, Logistic Regression, Naive Bayes, Random Forests, Decision Trees, Bagged Decision Trees, Boosted Decision trees and Bootstrapped Decision Trees on eleven different data sets and compared the results on 8 different performance metrics.
They found that Boosted decision trees came in first with Random Forests second and then Bagged Decision Trees and then SVM
The results will also depend on how many classes you are actually classifying for. | Why is svm not so good as decision tree on the same data?
Decision Trees and Random Forests are actually extremely good classifiers. While SVM's (Support Vector Machines) are seen as more complex it does not actually mean they will perform better.
The paper |
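As a toy illustration of this kind of comparison, here is my own scikit-learn sketch (not Caruana's benchmark; a ranking on a single synthetic dataset should not be over-interpreted): a boosted tree ensemble, a random forest and an SVM fitted on the same data.

```python
# Hypothetical mini-benchmark in the spirit of the comparison above:
# three classifier families evaluated on one synthetic dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=600, n_features=20, n_informative=8,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.33, random_state=0)

models = {
    "boosted trees": GradientBoostingClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
    # SVMs are scale-sensitive, so standardize the inputs first.
    "svm (rbf)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
scores = {name: m.fit(X_tr, y_tr).score(X_te, y_te) for name, m in models.items()}
for name, acc in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {acc:.3f}")
```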
20,846 | Why is svm not so good as decision tree on the same data? | "whether a problem is linear or not"
In a binary classification problem, if the dataset can be separated by a hyper-plane, it's a linear problem.
If the dataset is not linearly separable, then when you ask a linear classifier to find a separating hyper-plane that does not exist, the algorithm may seem to run forever.
One suggestion: You can sample a small portion of your data and try these algorithms to see if they work on the small dataset. Then increase the dataset size to check when the problem occurs.
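A minimal sketch of the linear-separability point (my own illustration in Python/scikit-learn, not from the answer above): on XOR-like data there is no separating hyper-plane, so a linear SVM cannot do much better than chance, while a kernel SVM or a decision tree fits the data easily.

```python
# XOR-like data: label = sign of x0*x1, which no single line can separate.
import numpy as np
from sklearn.svm import LinearSVC, SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

linear_acc = LinearSVC(max_iter=10_000).fit(X, y).score(X, y)  # near chance
rbf_acc = SVC(kernel="rbf").fit(X, y).score(X, y)              # non-linear boundary
tree_acc = DecisionTreeClassifier(max_depth=4).fit(X, y).score(X, y)
print(linear_acc, rbf_acc, tree_acc)
```

(Training accuracy is used here only to show whether each model can represent the boundary at all.)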
20,847 | Why is svm not so good as decision tree on the same data? | That is because of the nature of their decision boundaries.
The decision boundary of an SVM (with or without a kernel) is always linear in the feature space (the kernel-induced space for kernel SVMs, which is why it can look non-linear in the input space), while the decision boundary of a decision tree is piecewise linear (non-linear overall).
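The claim that a kernel SVM's boundary is linear "in the kernel space" can be checked numerically: the fitted decision function is an affine function of the kernel features, f(x) = sum_i alpha_i K(x_i, x) + b over the support vectors x_i. A small scikit-learn sketch (my own addition, not from the answer):

```python
# Verify: SVC's decision function equals a linear combination of kernel
# evaluations against the support vectors, plus an intercept.
import numpy as np
from sklearn.datasets import make_moons
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.2, random_state=0)
clf = SVC(kernel="rbf", gamma=0.5).fit(X, y)

K = rbf_kernel(X, clf.support_vectors_, gamma=0.5)       # kernel features
manual = K @ clf.dual_coef_.ravel() + clf.intercept_[0]  # linear in those features
ok = np.allclose(manual, clf.decision_function(X))
print(ok)
```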
20,848 | How to draw funnel plot using ggplot2 in R? | Although there's room for improvement, here is a small attempt with simulated (heteroscedastic) data:
library(ggplot2)
set.seed(101)
x <- runif(100, min=1, max=10)
y <- rnorm(length(x), mean=5, sd=0.1*x)
df <- data.frame(x=x*70, y=y)
m <- lm(y ~ x, data=df)
fit95 <- predict(m, interval="conf", level=.95)
fit99 <- predict(m, interval="conf", level=.999)
df <- cbind.data.frame(df,
lwr95=fit95[,"lwr"], upr95=fit95[,"upr"],
lwr99=fit99[,"lwr"], upr99=fit99[,"upr"])
p <- ggplot(df, aes(x, y))
p + geom_point() +
geom_smooth(method="lm", colour="black", lwd=1.1, se=FALSE) +
geom_line(aes(y = upr95), color="black", linetype=2) +
geom_line(aes(y = lwr95), color="black", linetype=2) +
geom_line(aes(y = upr99), color="red", linetype=3) +
geom_line(aes(y = lwr99), color="red", linetype=3) +
annotate("text", 100, 6.5, label="95% limit", colour="black",
size=3, hjust=0) +
annotate("text", 100, 6.4, label="99.9% limit", colour="red",
size=3, hjust=0) +
labs(x="No. admissions...", y="Percentage of patients...") +
theme_bw()
20,849 | How to draw funnel plot using ggplot2 in R? | If you are looking for this (meta-analysis) type of funnel plot, then the following might be a starting point:
library(ggplot2)
set.seed(1)
p <- runif(100)
number <- sample(1:1000, 100, replace = TRUE)
p.se <- sqrt((p*(1-p)) / (number))
df <- data.frame(p, number, p.se)
## common effect (fixed effect model)
p.fem <- weighted.mean(p, 1/p.se^2)
## lower and upper limits for 95% and 99.9% CI, based on FEM estimator
number.seq <- seq(0.001, max(number), 0.1)
number.ll95 <- p.fem - 1.96 * sqrt((p.fem*(1-p.fem)) / (number.seq))
number.ul95 <- p.fem + 1.96 * sqrt((p.fem*(1-p.fem)) / (number.seq))
number.ll999 <- p.fem - 3.29 * sqrt((p.fem*(1-p.fem)) / (number.seq))
number.ul999 <- p.fem + 3.29 * sqrt((p.fem*(1-p.fem)) / (number.seq))
dfCI <- data.frame(number.ll95, number.ul95, number.ll999, number.ul999, number.seq, p.fem)
## draw plot
fp <- ggplot(aes(x = number, y = p), data = df) +
geom_point(shape = 1) +
geom_line(aes(x = number.seq, y = number.ll95), data = dfCI) +
geom_line(aes(x = number.seq, y = number.ul95), data = dfCI) +
geom_line(aes(x = number.seq, y = number.ll999), linetype = "dashed", data = dfCI) +
geom_line(aes(x = number.seq, y = number.ul999), linetype = "dashed", data = dfCI) +
geom_hline(aes(yintercept = p.fem), data = dfCI) +
scale_y_continuous(limits = c(0,1.1)) +
xlab("number") + ylab("p") + theme_bw()
fp
20,850 | How to draw funnel plot using ggplot2 in R? | Bernd Weiss's code is very helpful. I made some amendments below, to change/add a few features:
Used standard error as the measure of precision, which is more typical of the funnel plots I see (in psychology)
Swapped the axes, so precision (standard error) is on the y-axis, and effect size is on the x-axis
Used geom_segment instead of geom_line for the line demarcating the meta-analytic mean, so that it would be the same height as the lines demarcating the 95% and 99% confidence regions
Instead of plotting the meta-analytic mean, I plotted its 95% confidence interval
My code uses a meta-analytic mean of 0.0892 (se = 0.0035) as an example, but you can substitute your own values.
estimate = 0.0892
se = 0.0035
#Store a vector of values that spans the range from 0
#to the max value of imprecision (standard error) in your dataset.
#Make the increment (the final value) small enough (I chose 0.001)
#to ensure your whole range of data is captured
se.seq=seq(0, max(dat$corr_zi_se), 0.001)
#Compute vectors of the lower-limit and upper limit values for
#the 95% CI region
ll95 = estimate-(1.96*se.seq)
ul95 = estimate+(1.96*se.seq)
#Do this for a 99% CI region too
ll99 = estimate-(3.29*se.seq)
ul99 = estimate+(3.29*se.seq)
#And finally, calculate the confidence interval for your meta-analytic estimate
meanll95 = estimate-(1.96*se)
meanul95 = estimate+(1.96*se)
#Put all calculated values into one data frame
#You might get a warning about '...row names were found from a short variable...'
#You can ignore it.
dfCI = data.frame(ll95, ul95, ll99, ul99, se.seq, estimate, meanll95, meanul95)
#Draw Plot
fp = ggplot(aes(x = se, y = Zr), data = dat) +
geom_point(shape = 1) +
xlab('Standard Error') + ylab('Zr')+
geom_line(aes(x = se.seq, y = ll95), linetype = 'dotted', data = dfCI) +
geom_line(aes(x = se.seq, y = ul95), linetype = 'dotted', data = dfCI) +
geom_line(aes(x = se.seq, y = ll99), linetype = 'dashed', data = dfCI) +
geom_line(aes(x = se.seq, y = ul99), linetype = 'dashed', data = dfCI) +
geom_segment(aes(x = min(se.seq), y = meanll95, xend = max(se.seq), yend = meanll95), linetype='dotted', data=dfCI) +
geom_segment(aes(x = min(se.seq), y = meanul95, xend = max(se.seq), yend = meanul95), linetype='dotted', data=dfCI) +
scale_x_reverse()+
scale_y_continuous(breaks=seq(-1.25,2,0.25))+
coord_flip()+
theme_bw()
fp
20,851 | How to draw funnel plot using ggplot2 in R? | See also the cran package berryFunctions, which has a funnelPlot for proportions without using ggplot2, if anyone needs it in base graphics.
http://cran.r-project.org/web/packages/berryFunctions/index.html
There is also the package extfunnel, which I haven't looked at.
20,852 | In some sense, is linear regression an estimate of an estimate of an estimate? | To some extent, you make some very good points. The biggest problem in your interpretation is that you have confused the concepts of approximation and estimation.
By probability theory, there exists a Borel function $f: \mathbb{R} \to \mathbb{R}$ such that $E[Y|X] = f(X)$ (almost surely). As you stated, for general distribution of $(X, Y)$, $f$ seldom has a nice closed form. On the other hand, suppose by some means, we have collected $n$ observations $S = \{(x_1, y_1), \ldots, (x_n, y_n)\}$ from the underlying distribution $(X, Y)$. A natural problem in statistics is then: can we use the sample $S$ to make some inference on the unknown $f$? Note that, while it is standard to say "estimate the functional form of $f$ using $S$", it is conceptually incorrect to say "estimate the random variable $Y$ using another random variable $X$", for the following two reasons:
No random variable $Y$ can be "estimated" by another random variable $X$. This has been refuted in Tim's answer. I just want to add that, if you recall that a random variable is essentially a real-valued function, does it make much sense to say use one function to "estimate" another function? From probability perspective, the statement "$E(Y|X)$ minimizes the mean-squared error $E[(Y - h(X))^2]$ over all $L^2$-functions $h(X)$ of $X$" is good enough but needs some correction as well: the "mean-squared error" has to be the "conditional mean-squared error" $E[(Y - h(X))^2|X]$. Do not use "estimator" for $E(Y|X)$ because $Y$ is not a valid estimand from statistics perspective (also see elaborations below).
In statistics (at least in frequentist statistical inference), the terminology "estimation" specifically means using an observed sample to draw some information on some unknown, yet non-random quantities (called "parameters") of an underlying population (or equivalently, distribution). From this perspective, your misuse of the word "estimate" is obvious: throughout your question, there is only one place you mentioned "sample": "So we use linear regression techniques such as the least-squared solution to get an estimate from a random sample." To be fair, this is the only place that you used the word "estimate" correctly, whereas "estimate/estimator" appeared in other places do not align with their standard statistical usages.
The more appropriate word for the problem you described is "approximation" (you actually also mentioned this term once but for the most part confused it with "estimation"): since $E[Y|X] = f(X)$ in general does not have an analytical form (i.e., "probability deduction" failed to work here), we need to turn to the help of statistical inference. But in order to get the statistics machine running, the first question we need to face is: what statistical tools should we use? Parametric inference or non-parametric inference? It turns out that the linear model is the simplest parametric inference weapon that practitioners like to use, which means you specify $f(X)$ (it may well be a completely wrong specification, but the advantage is its simplicity and interpretability) as a linear function of $X$, i.e., $f(X) = \alpha + \beta X$, and then go ahead and use $S$ to estimate the parameters $\alpha$ and $\beta$. The procedure of specifying the unknown functional form $f$ as a linear function $\alpha + \beta X$ with just two unknown parameters is approximation (or model specification); it is not estimation, which is actually the next step that follows model specification. It is clear that while estimation cannot be done without sample/data, approximation in principle can be done without data (because it is just about selecting a simpler function to proxy a complicated function). However, making a satisfactory approximation (i.e., building a decent model) requires the guidance of data as well and is usually interwoven with estimation in an iterative style. In this sense, "approximation" and "estimation" are closely related.
It is worth mentioning that when the joint distribution of $(X, Y)$ is bivariate Gaussian, then approximating $f$ by $\alpha + \beta X$ becomes exact. However, this doesn't make $\alpha + \beta X$ the best linear unbiased estimator of $f(X)$ when $(X, Y)$ is non-Gaussian. The "best linear unbiased estimator" refers to the estimator $(\hat{\alpha}, \hat{\beta})$ that minimizes variance after you have approximated $f(X)$ by $\alpha + \beta X$. It is well known that when the error distribution is spherical, the best linear unbiased estimator is the ordinary least-squares estimator.
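For reference, the bivariate Gaussian case mentioned above has the standard closed form (my addition, a textbook identity rather than part of the original answer): the conditional mean is exactly linear in $X$,

```latex
\[
  E[Y \mid X] \;=\; \mu_Y + \rho \frac{\sigma_Y}{\sigma_X}\,(X - \mu_X)
  \;=\; \alpha + \beta X,
  \qquad
  \beta = \frac{\operatorname{Cov}(X, Y)}{\operatorname{Var}(X)}, \quad
  \alpha = \mu_Y - \beta\,\mu_X,
\]
```

so in this special case the linear "approximation" incurs no approximation error at all; only the estimation of $\alpha$ and $\beta$ remains.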
Finally, let me quote the opening remark of Chapter 5 in The Elements of Statistical Learning to consolidate the point made above. If you want to get a better, more realistic approximation to $f$ than linear model, you can start looking into this chapter too.
We have already made use of models linear in the input features, both for
regression and classification. Linear regression, linear discriminant analysis,
logistic regression and separating hyperplanes all rely on a linear model.
It is extremely unlikely that the true function $f(X)$ is actually linear in
$X$. In regression problems, $f(X) = E(Y |X)$ will typically be nonlinear and
nonadditive in $X$, and representing $f(X)$ by a linear model is usually a convenient, and sometimes a necessary, approximation. Convenient because a
linear model is easy to interpret, and is the first-order Taylor approximation
to $f(X)$.
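The approximation-versus-estimation distinction drawn in this answer can be made concrete in a short numerical sketch (my own illustration, in Python): the true $f$ is non-linear, the linear model only approximates it, and OLS estimates the coefficients of that approximation; the OLS slope coincides with the sample analogue of $\operatorname{Cov}(X,Y)/\operatorname{Var}(X)$.

```python
# Sketch: the true regression function f(x) = E[Y|X=x] is nonlinear; the
# linear model alpha + beta*x only *approximates* it. OLS *estimates*
# (alpha, beta) from a sample, and its slope equals Cov(X, Y) / Var(X)
# computed on that sample.
import numpy as np

rng = np.random.default_rng(1)
n = 100_000
x = rng.uniform(0, 1, n)
y = np.sin(2 * x) + rng.normal(scale=0.1, size=n)  # nonlinear f, additive noise

beta_hat, alpha_hat = np.polyfit(x, y, 1)            # OLS estimate from the sample
beta_star = np.cov(x, y)[0, 1] / np.var(x, ddof=1)   # sample Cov/Var target
print(beta_hat, beta_star)
```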
20,853 | In some sense, is linear regression an estimate of an estimate of an estimate? | It's not.
Consider the problem of estimating a random variable $Y$ using another random variable $X$.
We don't estimate random variables, but things about random variables. What would it mean to “estimate the random variable”? If I told you that I “estimated that the length of life is 70 years” what would it mean? Everybody would live exactly 70 years? It's maximum, minimum, average, mode, or median..? The statement would be quite meaningless. Also, why would you “estimate” a random variable with a single value when you can estimate its distribution (e.g. with empirical distribution or kernel density)?
The best estimator of $Y$ by a function of $X$ is the conditional expectation $E[Y|X]$. [...]
$E[Y]$ or $E[Y|X]$ are not estimators, but properties of random variables. The estimator is a function of a sample, the expected value is a property of a random variable.
However, $E[Y|X]$ seldom has a nice closed expression. [...] So we use linear regression techniques such as the least-squared solution to get an estimate from a random sample.
One has nothing to do with the other. To find $E[Y]$ you need to solve an integral; $E[Y]$ applies to mathematical objects (random variables). Linear regression needs data: if I asked you for the expected value given only the probability density function of the variable, you wouldn't be able to run linear regression on that function, and there wouldn't be a closed-form solution either. The opposite is also true: you cannot calculate the expected value from the data, but you can use some estimator to approximate it.
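To underline the distinction drawn above, here is my own Python sketch (not part of the original answer): the expected value of an Exponential($\lambda$) variable comes from an integral over its density, with no data involved, whereas the sample mean is an estimator computed from observations that merely approximates it.

```python
# E[Y] is a property of the distribution (an integral); the sample mean is an
# estimator of it, a function of observed data.
import numpy as np

lam = 2.0
t = np.linspace(0.0, 40.0, 800_001)
pdf = lam * np.exp(-lam * t)                   # Exponential(lambda) density
expectation = np.sum(t * pdf) * (t[1] - t[0])  # Riemann sum for ∫ t f(t) dt = 1/lam

sample = np.random.default_rng(0).exponential(1 / lam, size=100_000)
estimate = sample.mean()                       # estimator: depends on the sample
print(expectation, estimate)
```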
20,854 | In some sense, is linear regression an estimate of an estimate of an estimate? | A frequentist would use the term "prediction" for what you call "estimation of a random variable". Furthermore, if you don't assume a linear relationship but just use this as an approximation, using the term "estimation" for "approximation" is confusing. And even then, what is "estimated" is the coefficients of the regression, not the prediction of $Y$. So you're estimating the coefficients of an approximation in order to predict $Y$.
Things are not so clear in Bayesian analysis, where parameters to be estimated are also random variables. There $E[Y|X]$ may legitimately be called an "estimator" (even though some would probably still see this as a confusing use of terminology), and you may estimate the involved regression parameters on top of it. I don't see any way to justify the term "estimator" for the linear approximation in a Bayesian framework, though.
Is Median Absolute Percentage Error useless?

I would be very careful about percentage errors, especially in the context of skewed distributions in the outcome (more precisely: skewed error distributions).
Look at each separate prediction. You evaluate it using an Absolute Percentage Error. This metric will prefer underpredictions, and especially so in the case of skewed error distributions. See What are the shortcomings of the Mean Absolute Percentage Error (MAPE)?, and note that the argument really applies to the raw APE, whether we summarize the APEs using the mean or the median.
Summarizing these separate APEs using the median instead of the mean (or using trimming, per Christian's answer) will attenuate the problem, but it won't solve it. "Optimal" predictions will still be biased low. You can simulate this by running an analysis like in the post linked to above: simulate skewed data whose distribution you know (this would stand in for the unknown error distribution in your application), and see which one-number summary will minimize the median APE.
If minimizing the APE is what you really want, and bias is not a problem for you, then by all means, go ahead. I just can't think of a business problem that would be better addressed by an APE-optimal forecast rather than an unbiased expectation forecast. As such, I would say that the "better interpretability" of APEs is a mirage.
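The simulation suggested above can be sketched as follows (Python rather than R; the lognormal stand-in for the skewed distribution, the seed, and the search grid are my assumptions, not part of the original answer):

```python
import random
import statistics

# For right-skewed "actuals", the constant forecast that minimizes the mean
# APE lands well below the sample mean: APE-optimal forecasts are biased low.
random.seed(1)
actuals = [random.lognormvariate(0.0, 1.0) for _ in range(2_000)]

def mean_ape(f, ys):
    return statistics.fmean(abs(y - f) / y for y in ys)

grid = [i / 100 for i in range(5, 301)]            # candidate constant forecasts
ape_optimal = min(grid, key=lambda f: mean_ape(f, actuals))

print(round(statistics.fmean(actuals), 2))  # unbiased benchmark (around 1.6 here)
print(ape_optimal)                          # noticeably smaller
```

The same exercise with the median of the APEs attenuates, but does not remove, the downward bias.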
Be careful with the median for performance metrics! Robustness to a small number of outliers is a good thing in most cases, but if you use the median, a method may look good that in fact gives you a bad result in, say, 30 or 40% of the cases, and that's mostly not appropriate. I have used one-sided upper 10% trimmed means in such cases (probably not implemented in standard packages either) to express that I'm happy if I can have a good fit in 90% of the cases even if up to 10% of predictions (or whatever performance you are measuring) are bad, but I don't want to tolerate more than that. It depends on the specific situation though.
Other than that, in principle your idea makes some sense, and the fact that it isn't implemented anywhere (or at least not where you looked) could be explained by the fact that many people don't think very much about the specific performance metric and are happy with what is available by default, which is often motivated by certain mathematical considerations that may be rather irrelevant for the application at hand. Many fairly simple but nonstandard things that can be useful in a certain situation are not implemented in standard packages. It's always good to question one's own ideas, but the bare fact that something isn't implemented doesn't mean there's anything fundamentally wrong with it.
By the way, here's something we've written some time ago: Some thoughts about the design of loss functions.
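Such a one-sided upper 10% trimmed mean is easy to write yourself. A sketch (Python; the exact definition, dropping only the largest 10% of the values, is my reading of the description above):

```python
# One-sided upper trimmed mean: drop only the worst (largest) `trim` share of
# the values before averaging, so a few disastrous cases are tolerated but
# the bulk of the performance still has to be good.
def upper_trimmed_mean(values, trim=0.10):
    xs = sorted(values)
    keep = len(xs) - int(trim * len(xs))
    return sum(xs[:keep]) / keep

errors = [0.1, 0.2, 0.2, 0.3, 0.3, 0.4, 0.5, 0.6, 0.8, 9.0]
print(upper_trimmed_mean(errors))   # about 0.378: the single outlier 9.0 is dropped
print(sum(errors) / len(errors))    # the plain mean, 1.24, is dominated by it
```

Unlike the median, this still reacts if 30 or 40% of the cases are bad, because only the top 10% are discarded.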
I've spent a few years building real estate price regressors, which are known as "AVMs" (Automated Valuation Models). A few comments:
Yes, "median absolute percentage error" is both a reasonable metric and one that gets used in practice. (And in an instance of catastrophic acronym failure, "MAPE" can refer either to "Mean" or "Median" absolute percent error.)
Measuring "median absolute percentage error" is very strongly related to measuring "median absolute log error", and the latter has some advantages.
If you're considering log error, then deviations become symmetric. (This addresses quarague's observation about overestimating or underestimating by 2x.)
Implementing this metric should be very easy-- you just take the log() of everything and evaluate the MAE().
The logarithm also suppresses very large outliers (e.g., absurdly expensive mansions), which is one of the problems with examining real estate data. This can have both modeling and numerical stability advantages. (There are also non-arms-length real estate deals, which lead to outliers that are very small, but these are frequently easier to detect.)
There are companies that provide AVMs for banks and so forth; there are also companies that exist solely to evaluate the first set of companies. If you're trying to understand standard evaluation methodology in this space, you may want to examine the latter. To get started, maybe check out AVMetrics. (I have no connection with that company, by the way, aside from having read one of their analyses.)
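The "take the log() of everything and evaluate the MAE()" recipe can be sketched like this (Python; the function name and the toy prices are mine, not a standard API):

```python
import math
import statistics

# Absolute log errors are symmetric in the multiplicative sense: predicting
# 2x too high or 2x too low gives the same error, log(2).
def median_abs_log_error(preds, actuals):
    return statistics.median(abs(math.log(p / a)) for p, a in zip(preds, actuals))

actual = [100.0, 250.0, 400.0]
double = [2 * a for a in actual]   # every prediction 2x too high
half   = [a / 2 for a in actual]   # every prediction 2x too low

print(median_abs_log_error(double, actual))  # log(2), about 0.693
print(median_abs_log_error(half, actual))    # log(2), about 0.693
```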
The potential issue with absolute percentage error is that it is not symmetric with respect to over- and underestimating. If you overestimate by a factor of 2 you will get an error of 100%; if you underestimate by a factor of 2 you will get an error of only 50%.
If your model is so good that most estimates are very close to the true value, this effect becomes relatively small, but if some estimates are way off it will impact your model choices. A model that occasionally makes severe underestimates but no overestimates will look better than a model that occasionally makes severe overestimates.
Whether this happens in your case, and if it does, whether it is a problem, depends on your specific situation, but it is something to be aware of when looking at absolute percentage errors.
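In numbers (a trivial Python sketch of the factor-of-2 example above):

```python
# Same multiplicative miss, different absolute percentage error.
def ape(forecast, actual):
    return abs(actual - forecast) / actual

print(ape(200.0, 100.0))  # 2x overestimate  -> 1.0 (100% APE)
print(ape(50.0, 100.0))   # 2x underestimate -> 0.5 (50% APE)
```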
TLDR; Here is a point of view in terms of what the median/mean tell about the behaviour of the tails. The median gives little information, while the mean does.
A related question is Chebychev-like inequality based on the median absolute deviation (about the median)
I answered that question while understanding the problem to be about the mean absolute deviation. The reason that I did that is because the mean makes much more sense in relation to Chebychev-like inequalities. A problem with the median is that it only relates to a single point on the distribution curve
$$\text{median}(X) = x:F(x) = 0.5$$
The median tells little about the entire distribution and what the tail of the distribution does. The median can even be zero if more than 50% of the deviations are zero.
The mean on the other hand, gives a more weighted information about the entire distribution and includes the tails.
$$\text{mean}(X) = \int_0^\infty \left(1 - F(x)\right) \, dx \quad \text{(for nonnegative } X\text{)}$$
Let's look at a few curves with median or mean equal to 1.
The red curves have an average of 1 and will be restricted to be below the black curve 1/x.
The blue curves have a median of 1 and will only pass through the point (1, 0.5); beyond that, they can have any shape.
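A small numeric illustration (Python; the two made-up samples are mine): equal medians say nothing about the tail, while a finite mean bounds it, matching the $1/x$ curve mentioned above.

```python
import statistics

# Two positive samples with the same median but wildly different tails: the
# median cannot tell them apart, while the mean reacts to the tail. A finite
# mean also bounds the tail (Markov: share of values above x is <= mean / x).
light = [0.5, 0.9, 1.0, 1.1, 1.5]
heavy = [0.5, 0.9, 1.0, 1.1, 1000.0]

print(statistics.median(light), statistics.median(heavy))  # 1.0 and 1.0
print(statistics.fmean(light), statistics.fmean(heavy))    # 1.0 and about 200.7

x = 2.0
share_above = sum(v > x for v in heavy) / len(heavy)
print(share_above <= statistics.fmean(heavy) / x)          # Markov bound holds
```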
How do I determine whether two correlations are significantly different?

Sometimes one might be able to accomplish this in multiple regression, where A is the DV, B is the score people have on a scale, and C is a dummy code that says whether it is B1 or B2: lm(A~B+C+B*C). The interaction term, B*C, will tell you if the correlations are different, while simple slopes between A and B at both levels of C will tell you the correlations.
However, it is not possible to fit all types of comparisons between conditions in this framework. The cocor R package is very useful, and it has a very simple point-and-click interface on the web. Note that, with different missing data, you have neither independent nor dependent samples. I would use listwise deletion here, to keep it simple (and power isn't an issue for you).
Oh, the power of the bootstrap. Let's look at three vectors for illustration: $A$, $B_1$ and $B_2$, where:
$$Cor(A, B_1) = 0.92$$
$$Cor(A, B_2) = 0.86$$
The goal is to determine whether the correlations of these two data sets are significantly different, by taking bootstrap samples like so:
n <- length(A)      # number of paired observations
B <- 10000          # number of bootstrap resamples
cor1 <- cor2 <- rep(0, B)
for(i in 1:B){
  samp <- sample(n, n, TRUE)   # resample case indices with replacement
  cor1[i] <- cor(A[samp], B1[samp])
  cor2[i] <- cor(A[samp], B2[samp])
}
We can plot the bootstrap distributions of the two correlations:
We can also obtain 95% Confidence Intervals for $Cor(A, B_i)$.
95% CI for $Corr(A, B_1)$:
$$(0.897, 0.947)$$
95% CI for $Corr(A, B_2)$:
$$(0.810, 0.892)$$
The fact that the intervals don't overlap (barely) gives us some evidence that the difference in sample correlations which we observed is indeed statistically significant.
As amoeba points out in the comments, a more "powerful" result comes from getting the difference for each of the bootstrap samples.
A 95% CI for the difference between the two is:
$$(0.019, 0.108)$$
Noting that the interval (barely) excludes 0, we have similar evidence as before.
To handle the missing data problem, just select your bootstrap samples from the pairs which are contained in both data sets.
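For readers without R, the same pairs-resampling scheme can be sketched in plain Python (the simulated $A$, $B_1$, $B_2$, the seed, and the noise levels are stand-ins of mine, not the data from the answer):

```python
import math
import random

# Simulated paired data: B1 tracks A closely, B2 tracks it more weakly.
random.seed(7)
n = 200
A  = [random.gauss(0, 1) for _ in range(n)]
B1 = [a + random.gauss(0, 0.5) for a in A]
B2 = [a + random.gauss(0, 1.5) for a in A]

def cor(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    sxy = sum((u - mx) * (v - my) for u, v in zip(x, y))
    sx = math.sqrt(sum((u - mx) ** 2 for u in x))
    sy = math.sqrt(sum((v - my) ** 2 for v in y))
    return sxy / (sx * sy)

diffs = []
for _ in range(1000):
    idx = [random.randrange(n) for _ in range(n)]   # resample the PAIRS
    As = [A[i] for i in idx]
    diffs.append(cor(As, [B1[i] for i in idx]) - cor(As, [B2[i] for i in idx]))

diffs.sort()
lo, hi = diffs[int(0.025 * len(diffs))], diffs[int(0.975 * len(diffs)) - 1]
print(round(lo, 3), round(hi, 3))   # percentile CI for cor(A,B1) - cor(A,B2)
```

Resampling the pairs (rather than each vector separately) is what preserves the dependence between the two correlations.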
Assume Fisher transformation: $r_1'=\tanh^{-1}(r_1)$ and $r_2'=\tanh^{-1} \left(r_2\right)$. Or, in an equivalent and perhaps clearer way (thanks to @dbwilson!), $r_1'={1\over2}\ln\left({1+r_1\over1-r_1}\right)$ and $r_2'={1\over2}\ln\left({1+r_2\over1-r_2}\right)$.
Then it follows that, because the Fisher-transformed variables are now approximately normally distributed and a difference of normally distributed random variables is still normally distributed:
$$z={r_1'-r_2'\over S}\sim N(0,1) $$
With
$$S=\sqrt{S_1^2+S_2^2}=\sqrt{{1\over n_1-3}+{1\over n_2-3}}$$
So you test the null hypothesis $H_0: z=0$ by obtaining the two-sided $p$-value $2\cdot P(Z>|z|)$.
Compared to the habitual $t$-test, notice we couldn't use the $t$-statistics so easily (see What is the distribution of the difference of two t-distributions?), so there's a consideration to be made about the degrees of freedom available in the computation; i.e., we assume $n$ large enough that the normal approximation to the respective $t$ statistics is reasonable.
--
After the comment by @Josh, we can somewhat incorporate the possibility of interdependence between samples (remember both correlations depend on the distribution of A). Without assuming independent samples and using the Cauchy-Schwarz inequality we can get the following upper bound (see: How do I find the standard deviation of the difference between two means?):
$$S\leq S_1+S_2$$
$$S\leq \sqrt{1\over n_1-3}+\sqrt{1\over n_2-3}$$
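A direct implementation of this test (Python; the example correlations 0.92 and 0.86 and the sample sizes are hypothetical, and the independent-samples standard error is used):

```python
import math

# Fisher z test for the difference of two correlations from independent
# samples of sizes n1 and n2.
def fisher_z_test(r1, n1, r2, n2):
    z1, z2 = math.atanh(r1), math.atanh(r2)          # Fisher transformation
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    z = (z1 - z2) / se
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return z, 2 * (1 - phi)                            # z and two-sided p

z, p = fisher_z_test(0.92, 100, 0.86, 100)
print(round(z, 2), round(p, 4))
```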
Edited after helpful feedback from Mark White (thank you!)
One option is to calculate both relationships (B1 with A, and B2 with A) in a single model that also estimates the difference between them. This is easy to accomplish with multiple regression. You would run a model with A as the dependent variable, and then one continuous variable with all of the scores for B1 and B2, a categorical variable indicating which variable it was (B1 or B2), and the interaction between them. In r:
> set.seed(24601)
>
> library(tidyverse)
> library(mvtnorm)
> cov <- matrix(c(1, .4, .16,.4, 1, .4, .16, .4, 1), ncol=3, byrow=TRUE)
> mydata <- rmvnorm(n=100, sigma = cov)
> colnames(mydata) = c("A", "B1", "B2")
> head(mydata)
A B1 B2
[1,] -0.1046382 0.6031253 0.5641158
[2,] -1.9303293 -0.7663828 -0.7921836
[3,] 0.1244192 -0.4413581 -1.2376256
[4,] -3.2822601 -1.2512055 -0.5586773
[5,] -0.9543368 -0.1743740 1.1884185
[6,] -0.4843183 -0.2612668 -0.7161938
Here are the correlations from the data I generated:
> cor(mydata)
A B1 B2
A 1.0000000 0.4726093 0.3043496
B1 0.4726093 1.0000000 0.3779376
B2 0.3043496 0.3779376 1.0000000
>
Changing the format of the data to meet the needs of the model (reformatting to "long"):
> mydata <- as.data.frame(mydata) %>%
+ gather("var", "value", B1, B2)
>
Here's the model:
summary(lm(A~value*var, data = mydata))
Call:
lm(formula = A ~ value * var, data = mydata)
Residuals:
Min 1Q Median 3Q Max
-2.89310 -0.52638 0.02998 0.64424 2.85747
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.09699 0.09014 -1.076 0.283
value 0.47445 0.09305 5.099 8.03e-07 ***
varB2 -0.10117 0.12711 -0.796 0.427
value:varB2 -0.13256 0.13965 -0.949 0.344
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.891 on 196 degrees of freedom
Multiple R-squared: 0.158, Adjusted R-squared: 0.1451
F-statistic: 12.26 on 3 and 196 DF, p-value: 2.194e-07
The results here (from my made-up data) suggest that there is a significant relationship between B1 and A (the test of the "value" coefficient, since B1 is the reference group for the "var" coefficient), but that the difference between the B1 relationship with A and the B2 relationship with A is not significant (the test of the "value:varB2" coefficient).
If you like thinking in terms of correlation rather than regression coefficients, just standardize all of your variables (A, B1, and B2) before running the model and the regression coefficients you'll get will be standardized (not quite the same thing as a zero-order correlation, but much closer in terms of interpretation).
Also note that this will restrict your analysis to just the cases that have both B1 and B2 (listwise deletion). As long as that leaves you with enough data to not be underpowered, and as long as the missing data are missing randomly (or a small enough proportion of the total data to not matter much even if they are missing nonrandomly), then that's fine.
The fact that you're restricting your analysis to the same dataset for estimating effects for both B1 and B2 (rather than using slightly different datasets, based on the different patterns of missingness) has the advantage of making interpretation of the difference between correlations a little more straightforward. If you calculate the correlations separately for each and then test the difference between them, you run into the problem that the underlying data are slightly different in each case --- any difference you see could be due to differences in the samples as much as differences in the actual relationships between variables.
20,864 | What information does a Box Plot provide that a Histogram does not? | The fact that box plots provide more of a summary of a distribution can also be seen as an advantage in certain cases. Sometimes when we're comparing distributions we don't care about overall shape, but rather where the distributions lie with regard to one another. Plotting the quantiles side by side can be a useful way of doing this without distracting us with other details that we may not care about.
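The side-by-side quantile comparison a row of box plots gives you can be reproduced directly; a minimal sketch with invented groups:

```python
import numpy as np

rng = np.random.default_rng(1)
# Three hypothetical groups that differ mainly in location.
groups = {
    "g1": rng.normal(0.0, 1.0, 500),
    "g2": rng.normal(0.5, 1.0, 500),
    "g3": rng.normal(1.0, 1.0, 500),
}

# The five numbers a box plot displays, per group, side by side:
# min, lower quartile, median, upper quartile, max.
summary = {
    name: np.percentile(x, [0, 25, 50, 75, 100])
    for name, x in groups.items()
}
```

Scanning the medians (the middle entry of each summary) shows where the groups lie relative to one another without any shape details.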
20,865 | What information does a Box Plot provide that a Histogram does not? | In the univariate case, box-plots do provide some information that the histogram does not (at least, not explicitly). That is, it typically provides the median, 25th and 75th percentile, min/max that is not an outlier and explicitly separates the points that are considered outliers. This can all be "eyeballed" from the histogram (and may be better to be eyeballed in the case of outliers).
However, the much bigger advantage is in comparing distributions across many different groups all at once. With 10+ groups, this is a tiring task with side-by-side histograms, but very easy with box plots.
As you mentioned, violin plots (or bean plots) are somewhat more informative alternatives. However, they require slightly more statistical knowledge than the box plots (i.e. if presenting to a non-statistical audience, it may be a little more intimidating) and box-plots have been around much longer than kernel density estimators, hence their greater popularity.
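The components listed above (median, quartiles, whisker ends, flagged outliers) can be computed directly. A sketch using the common Tukey 1.5×IQR fence rule — an assumption, since whisker conventions vary between plotting packages:

```python
import numpy as np

def boxplot_stats(x):
    """Median, quartiles, whisker ends and outliers, Tukey-style."""
    q1, med, q3 = np.percentile(x, [25, 50, 75])
    iqr = q3 - q1
    lo_fence, hi_fence = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    inside = x[(x >= lo_fence) & (x <= hi_fence)]
    outliers = x[(x < lo_fence) | (x > hi_fence)]
    return {"q1": q1, "median": med, "q3": q3,
            "whiskers": (inside.min(), inside.max()),
            "outliers": outliers}

stats = boxplot_stats(np.array([1.0, 2, 3, 4, 5, 6, 7, 8, 9, 30]))
```

Here the value 30 is explicitly separated as an outlier while the whiskers stop at the most extreme non-outlying points, which is exactly the separation the answer describes.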
20,866 | What information does a Box Plot provide that a Histogram does not? | If I show you a histogram and ask you where the median is, you might be quite some time figuring it out... and then you'll only get an approximation to it. If I do the same with a boxplot you have it immediately; if that's what you're interested in, boxplots obviously win.
I agree that boxplots are not as effective as a description of the distribution of a single sample, since they reduce it to a few points and that doesn't tell you a lot.
However, if you're comparing many dozens of distributions, having all the details of each may be more information than is easily compared -- you may want to reduce the information to a smaller number of things to compare.
If more information is better, there are many better choices than the histogram; a stem and leaf plot, for example, or an ecdf / quantile plot.
Or you could add information to a histogram:
(plots from the linked answer — e.g. a histogram with a narrow marginal boxplot — not reproduced here)
The first of those -- adding a narrow boxplot to the margin -- gives you any benefits to be gained from either display.
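The point that a histogram only lets you approximate the median can be checked numerically; a sketch (invented data) that recovers the median from binned counts alone and compares it with the exact sample median:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.normal(size=1000)

# What a histogram shows: bin edges and counts only.
counts, edges = np.histogram(x, bins=20)

# "Eyeball" the median from the histogram: walk the cumulative
# counts to the bin containing the 50% point, take its midpoint.
cum = np.cumsum(counts)
i = np.searchsorted(cum, len(x) / 2)
median_from_hist = (edges[i] + edges[i + 1]) / 2

exact_median = np.median(x)
error = abs(median_from_hist - exact_median)
```

The histogram-based estimate is only accurate to roughly the bin width, whereas a boxplot marks the median exactly.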
20,867 | What information does a Box Plot provide that a Histogram does not? | Bar plots provide only the frequency of observations in each bin, while box plots directly show where several summary statistics of a distribution lie — for example, the median and the spread — which bar plots cannot. Box plots are thus an effective comparative tool when one has several distributions.
20,868 | Why trace of $I−X(X′X)^{-1}X′$ is $n-p$ in least square regression when the parameter vector $\beta$ is of p dimensions? | The conclusion merely counts dimensions of vector spaces. However, it is not generally true.
The most basic properties of matrix multiplication show that the linear transformation represented by the matrix $\mathbb{H}=X(X^\prime X)^{-}X^\prime$ satisfies
$$\mathbb{H}^2 = \left(X(X^\prime X)^{-}X^\prime\right)^2=X(X^\prime X)^{-}(X^\prime X)(X^\prime X)^{-}X^\prime=\mathbb{H},$$
exhibiting it as a projection operator. Therefore its complement
$$\mathbb{Q} = 1 - \mathbb{H}$$
(as given in the question) also is a projection operator. The trace of $\mathbb{H}$ is its rank $h$ (see below), whence the trace of $\mathbb{Q}$ equals $n-h$.
From its very formula it is apparent that $\mathbb{H}$ is the matrix associated with the composition of two linear transformations $$\mathbb{J}=(X^\prime X)^{-}X^\prime$$ and $X$ itself. The first ($\mathbb{J}$) transforms the $n$-vector $y$ into the $p$-vector $\hat\beta$. The second ($X$) is a transformation from $\mathbb{R}^p$ to $\mathbb{R}^n$ given by $\hat y = X\hat \beta$. Its rank cannot exceed the smaller of those two dimensions, which in a least squares setting is always $p$ (but could be less than $p$, whenever $\mathbb{J}$ is not of full rank). Consequently the rank of the composition $\mathbb{H}=X\mathbb{J}$ cannot exceed the rank of $X$. The correct conclusion, then, is
$\text{tr} (\mathbb{Q}) = n-p$ if and only if $\mathbb{J}$ is of full rank; and in general $n \ge \text{tr} (\mathbb{Q}) \ge n-p$. In the former case the model is said to be "identifiable" (for the coefficients of $\beta$).
$\mathbb{J}$ will be of full rank if and only if $X^\prime X$ is invertible.
Geometric interpretation
$\mathbb{H}$ represents the orthogonal projection from $n$-vectors $y$ (representing the "response" or "dependent variable") onto the space spanned by the columns of $X$ (representing the "independent variables" or "covariates"). The difference $\mathbb{Q}=1-\mathbb{H}$ shows how to decompose any $n$-vector $y$ into a sum of vectors $$y = \mathbb{H}(y) + \mathbb{Q}(y),$$ where the first can be "predicted" from $X$ and the second is perpendicular to it. When the $p$ columns of $X$ generate a $p$-dimensional space (that is, are not collinear), the rank of $\mathbb{H}$ is $p$ and the rank of $\mathbb{Q}$ is $n-p$, reflecting the $n-p$ additional dimensions of variation in the response that are not represented within the independent variables. The trace gives an algebraic formula for these dimensions.
Linear Algebra Background
A projection operator on a vector space $V$ (such as $\mathbb{R}^n$) is a linear transformation $\mathbb{P}:V\to V$ (that is, an endomorphism of $V$) such that $\mathbb{P}^2=\mathbb{P}$. This makes its complement $\mathbb{Q}=1-\mathbb{P}$ a projection operator, too, because
$$\mathbb{Q}^2 = \left(1 - \mathbb{P}\right)^2 = 1 - 2\mathbb{P} + \mathbb{P}^2 = 1-2\mathbb{P}+\mathbb{P} = \mathbb{Q}.$$
All projections fix every element of their images, for whenever $v\in \text{Im}(\mathbb{P})$ we may write $v = \mathbb{P}(w)$ for some $w\in V$, whence $$\mathbb{P}(v) = \mathbb{P}(\mathbb{P}(w)) = \mathbb{P}^2(w) = \mathbb{P}(w) = v.$$
Associated with any endomorphism $\mathbb{P}$ of $V$ are two subspaces: its kernel $$\text{ker}(\mathbb{P}) = \{v\in V\,|\, \mathbb{P}(v)=0\}$$ and its image $$\text{Im}(\mathbb{P}) = \{v\in V\,|\, \exists_{w\in V}\ \mathbb{P}(w)=v\}.$$ Every vector $v\in V$ can be written in the form $$v = w+u$$ where $w\in \text{Im}(\mathbb{P})$ and $u\in \text{Ker}(\mathbb{P})$. We may therefore construct a basis $E \cup F$ for $V$ for which $E \subset \text{Ker}(\mathbb{P})$ and $F \subset \text{Im}(\mathbb{P})$. When $V$ is finite-dimensional, the matrix of $\mathbb{P}$ in this basis will therefore be in block-diagonal form, with one block (corresponding to the action of $\mathbb{P}$ on $E$) all zeros and the other (corresponding to the action of $\mathbb{P}$ on $F$) equal to the $f$ by $f$ identity matrix, where the dimension of $F$ is $f$. The trace of $\mathbb{P}$ is the sum of the values on the diagonal and therefore must equal $f\times 1 = f$. This number is the rank of $\mathbb{P}$: the dimension of its image.
The trace of $1-\mathbb{P}$ equals the trace of $1$ (equal to $n$, the dimension of $V$) minus the trace of $\mathbb{P}$.
These results may be summarized with the assertion that the trace of a projection equals its rank.
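That assertion is easy to verify numerically. A sketch (not part of the original answer) with a random design matrix, which has full column rank with probability one:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 4
X = rng.normal(size=(n, p))           # full column rank w.p. 1

H = X @ np.linalg.inv(X.T @ X) @ X.T  # hat matrix: projects onto col(X)
Q = np.eye(n) - H                     # complementary projection

trace_H = np.trace(H)                 # equals rank(H) = p
trace_Q = np.trace(Q)                 # equals n - p
idempotent = np.allclose(Q @ Q, Q)    # Q is a projection: Q^2 = Q
```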
20,869 | Why trace of $I−X(X′X)^{-1}X′$ is $n-p$ in least square regression when the parameter vector $\beta$ is of p dimensions? | @Dougal has already given an answer, but here is another one, a bit simpler.
First, let's use the fact that $\newcommand{\tr}{\mathrm{tr}}\tr(A - B) = \tr(A) - \tr(B)$. So, we get: $$\tr(Q) = \tr(I) - \tr(X(X'X)^{-1}X').$$ Now $I$ is an $n \times n$ identity matrix, so $\tr(I) = n$. Now let's use the fact that $\tr(AB) = \tr(BA)$, that is, the trace is invariant under cyclic permutations. So, we have: $$\tr(Q) = n - \tr((X'X)^{-1}(X'X)).$$ When we multiply $(X'X)^{-1}$ with $(X'X)$, we get a $p \times p$ identity matrix, whose trace is $p$. So, we get: $$\tr(Q) = n - p.$$
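The key step is the cyclic invariance tr(AB) = tr(BA), which holds even for rectangular $A$ and $B$; a quick numerical sketch (invented matrices):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(3, 7))
B = rng.normal(size=(7, 3))

# tr(AB) = tr(BA) although AB is 3x3 and BA is 7x7.
lhs = np.trace(A @ B)
rhs = np.trace(B @ A)

# Applied to the hat matrix:
# tr(X (X'X)^{-1} X') = tr((X'X)^{-1} X'X) = tr(I_p) = p.
n, p = 40, 5
X = rng.normal(size=(n, p))
trace_hat = np.trace(X @ np.linalg.inv(X.T @ X) @ X.T)
```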
20,870 | Why trace of $I−X(X′X)^{-1}X′$ is $n-p$ in least square regression when the parameter vector $\beta$ is of p dimensions? | $\newcommand\R{\mathbb R}$Assume that $n \ge p$ and that $X$ has full column rank.
Consider the compact singular value decomposition $X = U \Sigma V^T$, where $\Sigma \in \R^{p \times p}$ is diagonal and $U \in \R^{n \times p}, V \in \R^{p \times p}$ have $U^T U = V^T V = V V^T = I_p$ (but note $U U^T$ is rank at most $p$ so it cannot be $I_n$). Then
\begin{align}
X (X^T X)^{-1} X^T
&= U \Sigma V^T (V \Sigma U^T U \Sigma V^T)^{-1} V \Sigma U^T
\\&= U \Sigma V^T (V \Sigma^2 V^T)^{-1} V \Sigma U^T
\\&= U \Sigma V^T V \Sigma^{-2} V^T V \Sigma U^T
\\&= U U^T
.\end{align}
Now, there exists a matrix $U_2 \in \R^{n \times (n-p)}$ such that
$U_n = \begin{bmatrix}U & U_2\end{bmatrix}$ is unitary.
We can write
\begin{align}
I - X (X^T X)^{-1} X^T
&= U_n U_n^T - U U^T
\\&= U_n \left( I_n - \begin{bmatrix}I_p & 0 \\ 0 & 0\end{bmatrix} \right) U_n^T
\\&= U_n \begin{bmatrix}0 & 0 \\ 0 & I_{n-p}\end{bmatrix} U_n^T
.\end{align}
This form shows that $Q$ is positive semidefinite. Moreover, because $Q$ is symmetric, the display above is a valid spectral decomposition, so $Q$ has eigenvalue $1$ with multiplicity $n-p$ and eigenvalue $0$ with multiplicity $p$. The trace of $Q$ is the sum of its eigenvalues, namely $n-p$.
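The eigenvalue bookkeeping can be confirmed numerically with numpy's thin SVD; a sketch on invented data:

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 12, 4
X = rng.normal(size=(n, p))

U, s, Vt = np.linalg.svd(X, full_matrices=False)  # thin SVD: U is n x p
H = U @ U.T                      # equals X (X'X)^{-1} X'
Q = np.eye(n) - H

# Q is symmetric, so eigvalsh applies; its eigenvalues are 0 and 1.
eigvals = np.sort(np.linalg.eigvalsh(Q))
n_zero = int(np.sum(eigvals < 0.5))   # multiplicity of eigenvalue 0
n_one = int(np.sum(eigvals > 0.5))    # multiplicity of eigenvalue 1
```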
20,871 | What is the meaning of the density of a distribution at a point? | Before answering your question directly, one important thing to note is that, for continuous variables, the density at $X = x$ cannot be interpreted as the probability that $X = x$. Indeed, the density at any given $x$ could be greater than one because all that matters is that the density integrates to one, and the intervals are infinitesimally small.
With that background in mind, the density does have several useful meanings. One is that it can be used to compute your relative belief that $X = x_1$ versus some other $x_2$. To do this, simply take the ratio of the two densities.
So although we are usually more interested in areas under the curve with width greater than zero, areas under the curve with infinitesimally small width can be compared. The relevance of that comparison, however depends on your research question.
To give you a concrete example of when comparing densities is useful, I point you to a question I recently asked on Cross Validated about useful prior distributions for a correlation coefficient when you want to avoid the boundaries of the distribution. I argued that one possible prior distribution, a beta distribution with both parameters equal to two, is quite informative in that it places about seven times as much belief in the correlation being zero as in it being moderately negative at about -0.4 or strongly positive at about 0.94. To do that, I divided the approximate density at a correlation of 0 by the approximate densities at those points in the domain of the beta distribution (which is 0,1) and got the number seven. So the Beta(2,2) distribution has a seven times stronger belief that the correlation is zero than that it is moderately negative or strongly positive.
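The density-ratio idea can be reproduced with the Beta(2,2) density, $f(x) = 6x(1-x)$ on $(0,1)$; a sketch in which the particular evaluation points are chosen for illustration, not taken from the original post:

```python
# Beta(2,2) density on (0, 1); no library needed.
def beta22(x):
    return 6.0 * x * (1.0 - x)

# Relative belief in the middle of the domain versus a point
# near the boundary: the ratio of the two densities.
ratio = beta22(0.5) / beta22(0.05)
```

A ratio above 5 here means the prior considers values near the middle of the domain more than five times as plausible as values near the boundary.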
I hope this helps.
20,872 | What is the meaning of the density of a distribution at a point? | As you know, a probability density is not a probability. One interpretation of density considers the relationship $$f_X(x) = F'_X(x).$$ In this context, the density at some value $X = x$ is the instantaneous rate of change of the cumulative distribution; i.e., how rapidly the probability of observing $X \le x$ is increasing.
Another interpretation comes from the limit $$f_X(x) = \lim_{\Delta x \to 0} \frac{1}{\Delta x} \Pr[x \le X \le x + \Delta x].$$ In this sense, the density is the differential probability of observing $X \in [x, x + \Delta x]$ divided by the length of the interval $\Delta x$. So in some sense it represents a likelihood of observing $X \in [x, x + \Delta x]$, with larger densities reflecting a larger likelihood of observing values in that interval.
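This limit can be checked numerically for the standard normal, whose CDF is expressible via the standard-library error function; a sketch:

```python
import math

def norm_cdf(x):
    # Standard normal CDF via erf.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def norm_pdf(x):
    # Standard normal density.
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

x = 0.7
# Pr[x <= X <= x + dx] / dx approaches f(x) as dx -> 0.
approx = [(norm_cdf(x + dx) - norm_cdf(x)) / dx for dx in (0.1, 0.01, 0.001)]
target = norm_pdf(x)
```

Shrinking `dx` makes the difference quotient converge to the density, just as the limit definition says.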
20,873 | What is the meaning of the density of a distribution at a point? | $f(0)$ is the density at 0.
It is meaningful in several ways.
For example, the probability of being within a small distance $\pm\,\varepsilon/2$ of $x$ is approximately $\varepsilon f(x)$.
It is relative probability; for the standard normal $f(0)$ is $\sqrt{e}f(1)$, so a value very close to 0 is, relatively speaking, 1.65 times as likely as a value that close to 1. | What is the meaning of the density of a distribution at a point? | $f(0)$ is the density at 0.
It is meaningful in several ways.
For example, the probability of being within a small distance ($\pm \varepsilon/2$) of $x$ is approximately $\varepsilon f(x)$.
I | What is the meaning of the density of a distribution at a point?
$f(0)$ is the density at 0.
It is meaningful in several ways.
For example, the probability of being within a small distance ($\pm \varepsilon/2$) of $x$ is approximately $\varepsilon f(x)$.
It is relative probability; for the standard normal $f(0)$ is $\sqrt{e}f(1)$, so a value very close to 0 is, relatively speaking, 1.65 times as likely as a value that close to 1. | What is the meaning of the density of a distribution at a point?
$f(0)$ is the density at 0.
It is meaningful in several ways.
For example, the probability of being within a small distance ($\pm \varepsilon/2$) of $x$ is approximately $\varepsilon f(x)$.
I |
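Both claims in this answer can be checked numerically. This is an added sketch (the helper functions, point choices, and step size are illustrative, not from the original): it verifies the $\sqrt{e} \approx 1.65$ ratio and the $\varepsilon f(x)$ approximation for the standard normal:

```python
import math

def norm_pdf(x):
    # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# f(0)/f(1) = sqrt(e) ~ 1.6487
ratio = norm_pdf(0.0) / norm_pdf(1.0)

# Pr[x - eps/2 <= X <= x + eps/2] ~ eps * f(x)
eps = 0.01
prob = norm_cdf(0.0 + eps / 2) - norm_cdf(0.0 - eps / 2)
print(ratio, prob, eps * norm_pdf(0.0))
```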
20,874 | What is the meaning of the density of a distribution at a point? | Also note that while the density at a point has a value, the probability at a point for a continuous distribution will always be zero as the area under a point is 0. | What is the meaning of the density of a distribution at a point? | Also note that while the density at a point has a value, the probability at a point for a continuous distribution will always be zero as the area under a point is 0. | What is the meaning of the density of a distribution at a point?
Also note that while the density at a point has a value, the probability at a point for a continuous distribution will always be zero as the area under a point is 0. | What is the meaning of the density of a distribution at a point?
Also note that while the density at a point has a value, the probability at a point for a continuous distribution will always be zero as the area under a point is 0. |
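As an added illustration (standard normal assumed; the interval widths are arbitrary), the probability mass of a shrinking interval around a point goes to zero even though the density there stays positive:

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

x = 0.0
probs = []
for eps in (1.0, 0.1, 1e-3, 1e-6):
    # probability of an interval of width eps around x
    p = norm_cdf(x + eps / 2) - norm_cdf(x - eps / 2)
    probs.append(p)
    print(eps, p)  # shrinks toward 0, the "probability at a point"
```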
20,875 | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the errors are not normally distributed? | Short Answer
The probability density of a multivariate Gaussian distributed variable $x=(x_1, x_2,...,x_n)$, with mean $\mu=(\mu_1,\mu_2,...,\mu_n)$ is related to the square of the euclidean distance between the mean and the variable ($\vert \mu-x \vert_2^2$), or in other words the sum of squares.
Long Answer
If you multiply multiple Gaussian distributions for your $n$ errors, where you assume equal deviations, then you get a sum of squares.
$$ \begin{aligned}
\mathcal{L}(\mu_j,x_{ij}) = P(x_{ij} \vert \mu_j) & =\prod_{i=1}^n \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left[-\frac{(x_{ij}-\mu_j)^2}{2\sigma^2}\right] \\
&= \left(\frac{1}{\sqrt{2 \pi \sigma^2}} \right)^n \exp \left[ -\frac{\sum_{i=1}^n(x_{ij}-\mu_j)^2}{2\sigma^2}\right]
\end{aligned}$$
or in the convenient logarithmic form:
$$
\log\left(\mathcal{L}(\mu_j,x_{ij}) \right) = n \log \left( \frac{1}{\sqrt{2 \pi \sigma^2}} \right) -\frac{1}{2\sigma^2} \sum_{i=1}^n(x_{ij}-\mu_j)^2
$$
So optimizing the $\mu$ to minimize the sum of squares is equal to maximizing the (log) likelihood (ie. the product of multiple Gaussian distributions, or the multivariate Gaussian distribution).
It is this square of the difference $(\mu-x)$ nested inside the exponential, $\exp\left[ -(x_i-\mu)^2 \right]$, which other distributions do not have.
Compare for instance with the case for Poisson distributions
$$\log(\mathcal{L}) = \log \left( \prod\frac{\mu_j^{x_{ij}}}{x_{ij}!} \exp \left[ -\mu_j \right] \right) = -\sum \mu_j -\sum \log(x_{ij}!) + \sum \log(\mu_j) x_{ij} $$
which has a maximum when the following is minimized:
$$\sum \mu_j -\log(\mu_j) x_{ij}$$
which is a different beast.
In addition (history)
The history of the normal distribution (ignoring de Moivre getting to this distribution as an approximation for the binomial distribution) is actually the discovery of the distribution that makes the MLE correspond to the least squares method (rather than the least squares method being a method that can express the MLE of the normal distribution: first came the least squares method, second came the Gaussian distribution).
Note that Gauss, connecting the 'method of maximum likelihood' with the 'method of least squares', came up with the 'Gaussian distribution', $e^{-x^2}$, as the sole distribution of errors that leads us to make this connection between the two methods.
From Charles Henry Davis' translation (Theory of the motion of the heavenly bodies moving about the sun in conic sections. A translation of Gauss's "Theoria motus," with an appendix) ...
Gauss defines:
Accordingly, the probability to be assigned to each error $\Delta$ will be expressed by a function of $\Delta$ which we shall denote by $\psi \Delta$.
(Italicization done by me)
And continues (in section 177 pp. 258):
... whence it is readily inferred that $\frac{\psi^\prime\Delta}{\Delta}$ must be a constant quantity, which we will denote by $k$. Hence we have $$\text{log } \psi \Delta = \frac{1}{2} k \Delta \Delta + \text{Constant}$$ $$\psi \Delta = x e^{\frac{1}{2}k \Delta \Delta}$$ denoting the base of the hyperbolic logarithms by $e$ and assuming $$\text{Constant} = \log x$$
ending up (after normalization and realizing $k<0$) in
$$\psi \Delta = \frac{h}{\sqrt{\pi}} e^{-hh\Delta \Delta}
$$ | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the error | Short Answer
The probability density of a multivariate Gaussian distributed variable $x=(x_1, x_2,...,x_n)$, with mean $\mu=(\mu_1,\mu_2,...,\mu_n)$ is related to the square of the euclidean distance | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the errors are not normally distributed?
Short Answer
The probability density of a multivariate Gaussian distributed variable $x=(x_1, x_2,...,x_n)$, with mean $\mu=(\mu_1,\mu_2,...,\mu_n)$ is related to the square of the euclidean distance between the mean and the variable ($\vert \mu-x \vert_2^2$), or in other words the sum of squares.
Long Answer
If you multiply multiple Gaussian distributions for your $n$ errors, where you assume equal deviations, then you get a sum of squares.
$$ \begin{aligned}
\mathcal{L}(\mu_j,x_{ij}) = P(x_{ij} \vert \mu_j) & =\prod_{i=1}^n \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left[-\frac{(x_{ij}-\mu_j)^2}{2\sigma^2}\right] \\
&= \left(\frac{1}{\sqrt{2 \pi \sigma^2}} \right)^n \exp \left[ -\frac{\sum_{i=1}^n(x_{ij}-\mu_j)^2}{2\sigma^2}\right]
\end{aligned}$$
or in the convenient logarithmic form:
$$
\log\left(\mathcal{L}(\mu_j,x_{ij}) \right) = n \log \left( \frac{1}{\sqrt{2 \pi \sigma^2}} \right) -\frac{1}{2\sigma^2} \sum_{i=1}^n(x_{ij}-\mu_j)^2
$$
So optimizing the $\mu$ to minimize the sum of squares is equal to maximizing the (log) likelihood (ie. the product of multiple Gaussian distributions, or the multivariate Gaussian distribution).
It is this square of the difference $(\mu-x)$ nested inside the exponential, $\exp\left[ -(x_i-\mu)^2 \right]$, which other distributions do not have.
Compare for instance with the case for Poisson distributions
$$\log(\mathcal{L}) = \log \left( \prod\frac{\mu_j^{x_{ij}}}{x_{ij}!} \exp \left[ -\mu_j \right] \right) = -\sum \mu_j -\sum \log(x_{ij}!) + \sum \log(\mu_j) x_{ij} $$
which has a maximum when the following is minimized:
$$\sum \mu_j -\log(\mu_j) x_{ij}$$
which is a different beast.
In addition (history)
The history of the normal distribution (ignoring de Moivre getting to this distribution as an approximation for the binomial distribution) is actually the discovery of the distribution that makes the MLE correspond to the least squares method (rather than the least squares method being a method that can express the MLE of the normal distribution: first came the least squares method, second came the Gaussian distribution).
Note that Gauss, connecting the 'method of maximum likelihood' with the 'method of least squares', came up with the 'Gaussian distribution', $e^{-x^2}$, as the sole distribution of errors that leads us to make this connection between the two methods.
From Charles Henry Davis' translation (Theory of the motion of the heavenly bodies moving about the sun in conic sections. A translation of Gauss's "Theoria motus," with an appendix) ...
Gauss defines:
Accordingly, the probability to be assigned to each error $\Delta$ will be expressed by a function of $\Delta$ which we shall denote by $\psi \Delta$.
(Italicization done by me)
And continues (in section 177 pp. 258):
... whence it is readily inferred that $\frac{\psi^\prime\Delta}{\Delta}$ must be a constant quantity, which we will denote by $k$. Hence we have $$\text{log } \psi \Delta = \frac{1}{2} k \Delta \Delta + \text{Constant}$$ $$\psi \Delta = x e^{\frac{1}{2}k \Delta \Delta}$$ denoting the base of the hyperbolic logarithms by $e$ and assuming $$\text{Constant} = \log x$$
ending up (after normalization and realizing $k<0$) in
$$\psi \Delta = \frac{h}{\sqrt{\pi}} e^{-hh\Delta \Delta}
$$ | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the error
Short Answer
The probability density of a multivariate Gaussian distributed variable $x=(x_1, x_2,...,x_n)$, with mean $\mu=(\mu_1,\mu_2,...,\mu_n)$ is related to the square of the euclidean distance |
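The short answer's point, that minimizing the sum of squares and maximizing the Gaussian (log-)likelihood select the same parameter, can be illustrated with a small grid search. This is an added sketch; the data and grid are made up:

```python
import math

data = [1.2, 0.7, 1.9, 1.1, 0.4]  # made-up sample

def sse(mu):
    # sum of squared errors around a candidate mean mu
    return sum((x - mu) ** 2 for x in data)

def gaussian_loglik(mu, sigma=1.0):
    # Gaussian log-likelihood: a constant minus sse(mu)/(2 sigma^2)
    n = len(data)
    return -n * math.log(math.sqrt(2.0 * math.pi) * sigma) - sse(mu) / (2.0 * sigma ** 2)

grid = [i / 1000 for i in range(-1000, 3001)]
mu_ls = min(grid, key=sse)              # least-squares estimate
mu_ml = max(grid, key=gaussian_loglik)  # Gaussian maximum-likelihood estimate
print(mu_ls, mu_ml)  # the two grid searches land on the same point
```

Because the log-likelihood is a constant minus a positive multiple of the sum of squares, the two searches must agree.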
20,876 | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the errors are not normally distributed? | Because the MLE is derived from the assumption of normally distributed residuals.
Note that
$$
\text{min}_\beta~~ \|X \beta - y \|^2
$$
has no probabilistic meaning: it just finds the $\beta$ that minimizes the squared loss function. Everything is deterministic, with no random components in there.
Where the concept of probability and likelihood comes in is when we assume
$$
y=X\beta + \epsilon
$$
Where we are considering $y$ as a random variable, and $\epsilon$ is normally distributed. | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the error | Because the MLE is derived from the assumption of residual normally distributed.
Note that
$$
\text{min}_\beta~~ \|X \beta - y \|^2
$$
Has no probabilistic meaning: just find the $\beta$ that minimiz | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the errors are not normally distributed?
Because the MLE is derived from the assumption of normally distributed residuals.
Note that
$$
\text{min}_\beta~~ \|X \beta - y \|^2
$$
has no probabilistic meaning: it just finds the $\beta$ that minimizes the squared loss function. Everything is deterministic, with no random components in there.
Where the concept of probability and likelihood comes in is when we assume
$$
y=X\beta + \epsilon
$$
Where we are considering $y$ as a random variable, and $\epsilon$ is normally distributed. | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the error
Because the MLE is derived from the assumption of residual normally distributed.
Note that
$$
\text{min}_\beta~~ \|X \beta - y \|^2
$$
Has no probabilistic meaning: just find the $\beta$ that minimiz |
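An added sketch making this concrete (data made up): under a Gaussian error assumption the MLE of a location is the mean (the least-squares solution), while under a Laplace (double-exponential) error assumption it is the median, so changing the assumed error distribution changes the answer:

```python
import statistics

ys = [1.0, 1.2, 0.9, 1.1, 8.0]  # made-up sample with one outlier

mu_gauss = statistics.mean(ys)      # Gaussian MLE of location = least-squares solution
mu_laplace = statistics.median(ys)  # Laplace MLE of location = least-absolute-errors solution
print(mu_gauss, mu_laplace)
```

The outlier pulls the least-squares estimate far more than the Laplace-ML (median) estimate.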
20,877 | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the errors are not normally distributed? | The least squares and the maximum (Gaussian) likelihood fit are always equivalent. That is, both objectives are optimized by the same set of coefficients.
Changing the assumption on the errors changes your likelihood function (maximizing the likelihood of a model is equivalent to maximizing the likelihood of the error term), and hence the function will no longer be minimized by the same set of coefficients.
So in practice the two are the same, but in theory, when you maximize a different likelihood, you will get to a different answer than Least-squares | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the error | The least squares and the maximum (gaussian) likelihood fit are always equivalent. That is, they are minimized by the same set of coefficients.
Changing the assumption on the errors changes your likel | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the errors are not normally distributed?
The least squares and the maximum (Gaussian) likelihood fit are always equivalent. That is, both objectives are optimized by the same set of coefficients.
Changing the assumption on the errors changes your likelihood function (maximizing the likelihood of a model is equivalent to maximizing the likelihood of the error term), and hence the function will no longer be minimized by the same set of coefficients.
So in practice the two are the same, but in theory, when you maximize a different likelihood, you will get to a different answer than Least-squares | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the error
The least squares and the maximum (gaussian) likelihood fit are always equivalent. That is, they are minimized by the same set of coefficients.
Changing the assumption on the errors changes your likel |
20,878 | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the errors are not normally distributed? | A concrete example: Suppose we take a simple error function p(1)=.9, p(-9) =.10 . If we take two points, then LS is just going to take the line through them. ML, on the other hand, is going to assume that both points are one unit too high, and thus will take the line through the points shifted down on unit. | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the error | A concrete example: Suppose we take a simple error function p(1)=.9, p(-9) =.10 . If we take two points, then LS is just going to take the line through them. ML, on the other hand, is going to assume | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the errors are not normally distributed?
A concrete example: Suppose we take a simple error function p(1)=.9, p(-9)=.10. If we take two points, then LS is just going to take the line through them. ML, on the other hand, is going to assume that both points are one unit too high, and thus will take the line through the points shifted down one unit. | Why are the Least-Squares and Maximum-Likelihood methods of regression not equivalent when the errors are not normally distributed?
A concrete example: Suppose we take a simple error function p(1)=.9, p(-9) =.10 . If we take two points, then LS is just going to take the line through them. ML, on the other hand, is going to assume |
20,879 | Are $t$ test and one-way ANOVA both Wald tests? | Consider the following setup. We have a $p$-dimensional parameter vector $\theta$ that specifies the model completely and a maximum-likelihood estimator $\hat{\theta}$. The Fisher information in $\theta$ is denoted $I(\theta)$.
What is usually referred to as the Wald statistic is
$$(\hat{\theta} - \theta)^T I(\hat{\theta}) (\hat{\theta} - \theta)$$
where $I(\hat{\theta})$ is the Fisher information evaluated in the maximum-likelihood estimator. Under regularity conditions the Wald statistic follows
asymptotically a $\chi^2$-distribution with $p$-degrees of freedom when $\theta$ is the true parameter. The Wald statistic can be used to test a simple hypothesis $H_0 : \theta = \theta_0$ on the entire parameter vector.
With $\Sigma(\theta) = I(\theta)^{-1}$ the inverse Fisher information the Wald test statistic of the hypothesis $H_0 : \theta_1 = \theta_{0,1}$ is
$$\frac{(\hat{\theta}_1 - \theta_{0,1})^2}{\Sigma(\hat{\theta})_{11}}.$$
Its asymptotic distribution is a $\chi^2$-distribution with 1 degree of freedom.
For the normal model where $\theta = (\mu, \sigma^2)$ is the vector of the mean and the variance parameters, the Wald test statistic of testing if $\mu = \mu_0$ is
$$\frac{n(\hat{\mu} - \mu_0)^2}{\hat{\sigma}^2}$$
with $n$ the sample size.
Here $\hat{\sigma}^2$ is the maximum-likelihood estimator of $\sigma^2$ (where you divide by $n$). The $t$-test statistic is
$$\frac{\sqrt{n}(\hat{\mu} - \mu_0)}{s}$$
where $s^2$ is the unbiased estimator of the variance (where you divide by $n-1$). The Wald test statistic is almost but not exactly equal to the square of the $t$-test statistic, but they are asymptotically equivalent when $n \to \infty$. The squared $t$-test statistic has an exact $F(1, n-1)$-distribution, which converges to the $\chi^2$-distribution with 1 degree of freedom as $n \to \infty$.
The same story holds regarding the $F$-test in one-way ANOVA. | Are $t$ test and one-way ANOVA both Wald tests? | Consider the following setup. We have a $p$-dimensional parameter vector $\theta$ that specifies the model completely and a maximum-likelihood estimator $\hat{\theta}$. The Fisher information in $\the | Are $t$ test and one-way ANOVA both Wald tests?
Consider the following setup. We have a $p$-dimensional parameter vector $\theta$ that specifies the model completely and a maximum-likelihood estimator $\hat{\theta}$. The Fisher information in $\theta$ is denoted $I(\theta)$.
What is usually referred to as the Wald statistic is
$$(\hat{\theta} - \theta)^T I(\hat{\theta}) (\hat{\theta} - \theta)$$
where $I(\hat{\theta})$ is the Fisher information evaluated in the maximum-likelihood estimator. Under regularity conditions the Wald statistic follows
asymptotically a $\chi^2$-distribution with $p$-degrees of freedom when $\theta$ is the true parameter. The Wald statistic can be used to test a simple hypothesis $H_0 : \theta = \theta_0$ on the entire parameter vector.
With $\Sigma(\theta) = I(\theta)^{-1}$ the inverse Fisher information the Wald test statistic of the hypothesis $H_0 : \theta_1 = \theta_{0,1}$ is
$$\frac{(\hat{\theta}_1 - \theta_{0,1})^2}{\Sigma(\hat{\theta})_{11}}.$$
Its asymptotic distribution is a $\chi^2$-distribution with 1 degree of freedom.
For the normal model where $\theta = (\mu, \sigma^2)$ is the vector of the mean and the variance parameters, the Wald test statistic of testing if $\mu = \mu_0$ is
$$\frac{n(\hat{\mu} - \mu_0)^2}{\hat{\sigma}^2}$$
with $n$ the sample size.
Here $\hat{\sigma}^2$ is the maximum-likelihood estimator of $\sigma^2$ (where you divide by $n$). The $t$-test statistic is
$$\frac{\sqrt{n}(\hat{\mu} - \mu_0)}{s}$$
where $s^2$ is the unbiased estimator of the variance (where you divide by $n-1$). The Wald test statistic is almost but not exactly equal to the square of the $t$-test statistic, but they are asymptotically equivalent when $n \to \infty$. The squared $t$-test statistic has an exact $F(1, n-1)$-distribution, which converges to the $\chi^2$-distribution with 1 degree of freedom as $n \to \infty$.
The same story holds regarding the $F$-test in one-way ANOVA. | Are $t$ test and one-way ANOVA both Wald tests?
Consider the following setup. We have a $p$-dimensional parameter vector $\theta$ that specifies the model completely and a maximum-likelihood estimator $\hat{\theta}$. The Fisher information in $\the |
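The $n$ versus $n-1$ point above can be checked directly. In this added sketch (simulated data, arbitrary seed and sample size) the ratio of the Wald statistic to the squared $t$-statistic comes out as exactly $n/(n-1)$:

```python
import random

random.seed(0)
n = 30
mu0 = 0.0
xs = [random.gauss(0.5, 1.0) for _ in range(n)]

xbar = sum(xs) / n
sse = sum((x - xbar) ** 2 for x in xs)
sigma2_mle = sse / n   # ML estimate of the variance (divide by n)
s2 = sse / (n - 1)     # unbiased variance estimator (divide by n-1)

wald = n * (xbar - mu0) ** 2 / sigma2_mle   # Wald statistic
t2 = n * (xbar - mu0) ** 2 / s2             # squared t-statistic
print(wald / t2)  # equals n/(n-1) up to rounding
```

The ratio goes to 1 as $n \to \infty$, which is the asymptotic equivalence described above.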
20,880 | Are $t$ test and one-way ANOVA both Wald tests? | @NRH gave a good theoretical answer, here is one that intends to be simpler, more intuitive.
There is the formal Wald test (described in the answer by NRH), but we also refer to tests that look at the difference between an estimated parameter and its hypothesized value relative to the variation estimated at the estimated parameter as a Wald style test. So the t-test as we usually use it is a Wald Style test even if it is slightly different from the exact Wald test (a difference of $n$ vs. $n-1$ inside a square root). We could even design a Wald style test based on an estimated median minus the hypothesized median divided by a function of the IQR, but I don't know what distribution it would follow, it would be better to use a bootstrap, permutation, or simulated distribution for this test rather than depending on chi-square asymptotics. The F-test for ANOVA fits the general pattern as well, the numerator can be thought of as measuring the difference of the means from an overall mean and the denominator is a measure of the variation.
Also note that if you square a random variable that follows a t distribution then it will follow an F distribution with 1 df for the numerator and the denominator df will be those from the t distribution. Also note that an F distribution with infinite denominator df is a chi-square distribution. So that means that both the t-statistic (squared) and the F statistic are asymptotically chi-squared just like the Wald statistic. We just use the more exact distribution in practice. | Are $t$ test and one-way ANOVA both Wald tests? | @NRH gave a good theoretical answer, here is one that intends to be simpler, more intuitive.
There is the formal Wald test (described in the answer by NRH), but we also refer to tests that look at the | Are $t$ test and one-way ANOVA both Wald tests?
@NRH gave a good theoretical answer, here is one that intends to be simpler, more intuitive.
There is the formal Wald test (described in the answer by NRH), but we also refer to tests that look at the difference between an estimated parameter and its hypothesized value relative to the variation estimated at the estimated parameter as a Wald style test. So the t-test as we usually use it is a Wald Style test even if it is slightly different from the exact Wald test (a difference of $n$ vs. $n-1$ inside a square root). We could even design a Wald style test based on an estimated median minus the hypothesized median divided by a function of the IQR, but I don't know what distribution it would follow, it would be better to use a bootstrap, permutation, or simulated distribution for this test rather than depending on chi-square asymptotics. The F-test for ANOVA fits the general pattern as well, the numerator can be thought of as measuring the difference of the means from an overall mean and the denominator is a measure of the variation.
Also note that if you square a random variable that follows a t distribution then it will follow an F distribution with 1 df for the numerator and the denominator df will be those from the t distribution. Also note that an F distribution with infinite denominator df is a chi-square distribution. So that means that both the t-statistic (squared) and the F statistic are asymptotically chi-squared just like the Wald statistic. We just use the more exact distribution in practice. | Are $t$ test and one-way ANOVA both Wald tests?
@NRH gave a good theoretical answer, here is one that intends to be simpler, more intuitive.
There is the formal Wald test (described in the answer by NRH), but we also refer to tests that look at the |
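The squared-t/F relationship can be illustrated by simulation. This added sketch (seed, sample count, and degrees of freedom are arbitrary) builds $t$ variates from the classic $Z/\sqrt{V/k}$ construction and checks that the mean of $t^2$ is near $E[F(1,k)] = k/(k-2)$:

```python
import math
import random

random.seed(1)
k = 10        # denominator degrees of freedom
N = 100_000   # number of simulated draws

def t_sample():
    # t = Z / sqrt(V/k), with Z standard normal and V chi-squared(k)
    z = random.gauss(0.0, 1.0)
    v = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k))
    return z / math.sqrt(v / k)

mean_t2 = sum(t_sample() ** 2 for _ in range(N)) / N
print(mean_t2, k / (k - 2))  # mean of t^2 should be near E[F(1,k)] = k/(k-2)
```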
20,881 | Extract standard errors of coefficient linear regression R [duplicate] | It's useful to see what kind of objects are contained within another object. Using names() or str() can help here.
Note that out <- summary(fit) is the summary of the linear regression object.
names(out)
str(out)
The simplest way to get the coefficients would probably be:
out$coefficients[ , 2] #extract 2nd column from the coefficients object in out | Extract standard errors of coefficient linear regression R [duplicate] | It's useful to see what kind of objects are contained within another object. Using names() or str() can help here.
Note that out <- summary(fit) is the summary of the linear regression object.
names | Extract standard errors of coefficient linear regression R [duplicate]
It's useful to see what kind of objects are contained within another object. Using names() or str() can help here.
Note that out <- summary(fit) is the summary of the linear regression object.
names(out)
str(out)
The simplest way to get the coefficients would probably be:
out$coefficients[ , 2] #extract 2nd column from the coefficients object in out | Extract standard errors of coefficient linear regression R [duplicate]
It's useful to see what kind of objects are contained within another object. Using names() or str() can help here.
Note that out <- summary(fit) is the summary of the linear regression object.
names |
20,882 | Extract standard errors of coefficient linear regression R [duplicate] | Like this:
fit <- lm(ydata ~ ., data = data)
se <- sqrt(diag(vcov(fit)))
These are the classical asymptotic ones you see in summary. Please also see the links in my answer to this same question about alternative standard error options. | Extract standard errors of coefficient linear regression R [duplicate] | Like this:
fit <- lm(ydata ~ ., data = data)
se <- sqrt(diag(vcov(fit)))
These are the classical asymptotic ones you see in summary. Please also see the links in my answer to this same question about | Extract standard errors of coefficient linear regression R [duplicate]
Like this:
fit <- lm(ydata ~ ., data = data)
se <- sqrt(diag(vcov(fit)))
These are the classical asymptotic ones you see in summary. Please also see the links in my answer to this same question about alternative standard error options. | Extract standard errors of coefficient linear regression R [duplicate]
Like this:
fit <- lm(ydata ~ ., data = data)
se <- sqrt(diag(vcov(fit)))
These are the classical asymptotic ones you see in summary. Please also see the links in my answer to this same question about |
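For intuition about what sqrt(diag(vcov(fit))) computes, here is an added pure-Python sketch of the classical formulas for a one-predictor regression. The data are made up, and this mirrors (but does not call) the R code above:

```python
# Classical OLS standard errors for y = b0 + b1*x, computed by hand (made-up data)
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 8.1, 9.8]
n = len(xs)

xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
b1 = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / sxx  # slope
b0 = ybar - b1 * xbar                                            # intercept

resid = [y - (b0 + b1 * x) for x, y in zip(xs, ys)]
s2 = sum(r * r for r in resid) / (n - 2)   # residual variance, n-2 df

se_b1 = (s2 / sxx) ** 0.5                              # SE of the slope
se_b0 = (s2 * (1.0 / n + xbar ** 2 / sxx)) ** 0.5      # SE of the intercept
print(b0, b1, se_b0, se_b1)
```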
20,883 | Why do we need regularization for linear least squares given that a line is the simplest model possible? | Great question! The need for regularization always depends on your sample size.
Imagine you do not have a lot of data, just three samples. I plotted three possible linear regression lines. The red one does not use regularization; for the green one, regularization on the slope parameter is used, and the blue regression has very strong regularization on the slope parameter.
Which one is the best model? We don't know. That will only become clear as more data is collected. But regularized models produce more conservative estimates, which often work better in practice.
That said, if you really only have the case of a simple linear regression (one input variable, like in my example), you will often have enough data to use an unregularized model.
But keep in mind that linear regression also works for multiple input variables. In that case, the model will have $p+1$ parameters if you have $p$ features. You need regularization if $p$ is large compared to your sample size.
In summary, a model is never a priori complex or simple; it always depends on the data you want to use the model for. | Why do we need regularization for linear least squares given that a line is the simplest model possible? | Great question! The need for regularization always depends on your sample size.
Imagine you do not have a lot of data, just three samples. I plotted three possible linear regression lines. The red one | Why do we need regularization for linear least squares given that a line is the simplest model possible?
Great question! The need for regularization always depends on your sample size.
Imagine you do not have a lot of data, just three samples. I plotted three possible linear regression lines. The red one does not use regularization; for the green one, regularization on the slope parameter is used, and the blue regression has very strong regularization on the slope parameter.
Which one is the best model? We don't know. That will only become clear as more data is collected. But regularized models produce more conservative estimates, which often work better in practice.
That said, if you really only have the case of a simple linear regression (one input variable, like in my example), you will often have enough data to use an unregularized model.
But keep in mind that linear regression also works for multiple input variables. In that case, the model will have $p+1$ parameters if you have $p$ features. You need regularization if $p$ is large compared to your sample size.
In summary, a model is never a priori complex or simple; it always depends on the data you want to use the model for. | Why do we need regularization for linear least squares given that a line is the simplest model possible?
Great question! The need for regularization always depends on your sample size.
Imagine you do not have a lot of data, just three samples. I plotted three possible linear regression lines. The red one |
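The "more conservative estimates" point can be sketched with a ridge-style penalty on the slope alone. This is an added illustration with made-up points; the closed form $b_1 = S_{xy}/(S_{xx} + \lambda)$ assumes the intercept is left unpenalised:

```python
# Ridge-style penalty on the slope only: minimise sum((y - b0 - b1*x)^2) + lam*b1^2
xs = [1.0, 2.0, 3.0]
ys = [1.0, 2.5, 2.0]
n = len(xs)

xbar = sum(xs) / n
ybar = sum(ys) / n
sxx = sum((x - xbar) ** 2 for x in xs)
sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))

def ridge_slope(lam):
    # closed form when the intercept is left unpenalised
    return sxy / (sxx + lam)

for lam in (0.0, 1.0, 100.0):
    print(lam, ridge_slope(lam))  # slope shrinks toward 0 as lam grows
```

With lam = 0 this is the ordinary least-squares slope; larger penalties pull it toward the flat (strongly regularized) line.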
20,884 | Why do we need regularization for linear least squares given that a line is the simplest model possible? | Another way of looking at it: A linear model is not the simplest model that can be fitted to the data - a linear model with a constraint on the values of the weights is even more simple. For example, if we constrain the number of non-zero weights, we have a model that is structurally less complex - it has fewer parameters. As we increase the number of non-zero weights, we create a sequence of nested models of increasing complexity. A model with three non-zero weights can do everything a model with two non-zero weights can do, and also a few things it can't, so it must be a more complex model in some sense.
We can do something similar with regularisation, but it is a bit more subtle. We can create another sequence of models of increasing complexity by putting a constraint on the norm of the weight vector (limiting the magnitude of the weights rather than the number of non-zero weights). We then have an optimisation problem of the form:
$\min_{\vec{\theta}} f(\vec{\theta}) \quad \mathrm{s.t.} \quad \|\vec{\theta}\|^2 < C$
where $\vec{\theta}$ is the vector of model parameters (weights) and $C$ is the hyper-parameter controlling the maximum allowed value of the norm of the parameter vector and $f(\cdot)$ is the loss function (e.g. sum of squares). If we have a model with a particular value of $C$ and then increase $C$ a bit (to $C'$), then it can do anything it previously could, but it can also realize some additional mappings that it couldn't before. So as we increase $C$, we create models that are potentially more and more complex.
How does this relate to regularisation? One way of solving a constrained optimisation problem is to take the Lagrangian to turn it into an unconstrained optimisation problem, and in this case, the Lagrangian is:
$\Lambda(\vec{\theta},\lambda) = f(\vec{\theta}) + \lambda\|\vec{\theta}\|^2 - \lambda C.$
We can ignore the last term as it doesn't depend on the parameter vector, and what is left is a regularised loss function.
So we see that regularisation is a way of controlling the complexity of even a linear model, by limiting the set of mappings it can implement.
If we think of one-dimensional regression tasks, it is difficult to find a lot of value in regularisation, but when we have models with lots of parameters, regularisation becomes more obviously useful. Some of the attributes may be correlated with the target by random chance and have no predictive value. Regularisation will help suppress those attributes, and result in a model with better generalisation.
Modelling data generally involves fitting the complexity of the model (class) to the complexity of the data we have available, and regularisation provides a convenient way of doing that (it is a continuous hyper-parameter, so it is a bit less of a blunt instrument than e.g. feature selection).
Just to add, I've been a bit vague with terminology here. Really is is nested sets of model/hypothesis classes rather than models, i.e. the set of models that could be realised within the constraint imposed by the value of the regularisation parameter (c.f. @Ben's answer). | Why do we need regularization for linear least squares given that a line is the simplest model possi | Another way of looking at it: A linear model is not the simplest model that can be fitted to the data - a linear model with a constraint on the values of the weights is even more simple. For example | Why do we need regularization for linear least squares given that a line is the simplest model possible?
Another way of looking at it: A linear model is not the simplest model that can be fitted to the data - a linear model with a constraint on the values of the weights is even more simple. For example, if we constrain the number of non-zero weights, we have a model that is structurally less complex - it has fewer parameters. As we increase the number of non-zero weights, we create a sequence of nested models of increasing complexity. A model with three non-zero weights can do everything a model with two non-zero weights can do, and also a few things it can't, so it must be a more complex model in some sense.
We can do something similar with regularisation, but it is a bit more subtle. We can create another sequence of models of increasing complexity by putting a constraint on the norm of the weight vector (limiting the magnitude of the weights rather than the number of non-zero weights). We then have an optimisation problem of the form:
$\min_{\vec{\theta}} f(\vec{\theta}) \quad \mathrm{s.t.} \quad \|\vec{\theta}\|^2 < C$
where $\vec{\theta}$ is the vector of model parameters (weights) and $C$ is the hyper-parameter controlling the maximum allowed value of the norm of the parameter vector and $f(\cdot)$ is the loss function (e.g. sum of squares). If we have a model with a particular value of $C$ and then increase $C$ a bit (to $C'$), then it can do anything it previously could, but it can also realize some additional mappings that it couldn't before. So as we increase $C$, we create models that are potentially more and more complex.
How does this relate to regularisation? One way of solving a constrained optimisation problem is to take the Lagrangian to turn it into an unconstrained optimisation problem, and in this case, the Lagrangian is:
$\Lambda(\vec{\theta},\lambda) = f(\vec{\theta}) + \lambda\|\vec{\theta}\|^2 - \lambda C.$
We can ignore the last term as it doesn't depend on the parameter vector, and what is left is a regularised loss function.
So we see that regularisation is a way of controlling the complexity of even a linear model, by limiting the set of mappings it can implement.
If we think of one-dimensional regression tasks, it is difficult to find a lot of value in regularisation, but when we have models with lots of parameters, regularisation becomes more obviously useful. Some of the attributes may be correlated with the target by random chance and have no predictive value. Regularisation will help suppress those attributes, and result in a model with better generalisation.
Modelling data generally involves fitting the complexity of the model (class) to the complexity of the data we have available, and regularisation provides a convenient way of doing that (it is a continuous hyper-parameter, so it is a bit less of a blunt instrument than e.g. feature selection).
Just to add, I've been a bit vague with terminology here. Really it is nested sets of model/hypothesis classes rather than models, i.e. the set of models that could be realised within the constraint imposed by the value of the regularisation parameter (c.f. @Ben's answer). | Why do we need regularization for linear least squares given that a line is the simplest model possi
Another way of looking at it: A linear model is not the simplest model that can be fitted to the data - a linear model with a constraint on the values of the weights is even more simple. For example |
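A quick numerical illustration of the nested-complexity point above (a sketch with synthetic data of my own choosing, numpy only): the closed-form ridge solution $(X^TX+\lambda I)^{-1}X^Ty$ minimises the regularised loss, and as $\lambda$ shrinks (i.e. as the implicit constraint $C$ grows) the norm of the fitted weight vector grows, so a smaller $C$ really does mean a more restricted set of mappings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical synthetic problem: 40 samples, 5 features, 2 informative.
X = rng.standard_normal((40, 5))
y = X[:, 0] - 2.0 * X[:, 1] + 0.5 * rng.standard_normal(40)

def ridge(X, y, lam):
    """Closed-form minimiser of sum-of-squares loss + lam * ||w||^2."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Decreasing lambda plays the role of increasing the constraint C:
lams = [100.0, 10.0, 1.0, 0.1, 0.0]
norms = [np.linalg.norm(ridge(X, y, lam)) for lam in lams]
print(norms)  # non-decreasing: a larger C admits more mappings
```

The monotone growth of the weight norm as the penalty is relaxed is exactly the nesting of the constrained model classes.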
20,885 | Why do we need regularization for linear least squares given that a line is the simplest model possible? | I think part of the misunderstanding may be driven by the meaning of "model". A linear model is a set of distributions, where (for simplicity) we can consider that each distribution is represented by a line. Thus a linear model is a set (or collection) of lines - not a single line. The bigger that set is, the more complex the model is. Regularization--like removing variables from a regression--reduces the complexity of the model by making the model contain fewer lines.
This can be helpful since more complex models can be more prone to overfitting. | Why do we need regularization for linear least squares given that a line is the simplest model possi | I think part of the misunderstanding may be driven by the meaning of "model". A linear model is a set of distributions, where (for simplicity) we can consider that each distribution is represented by | Why do we need regularization for linear least squares given that a line is the simplest model possible?
I think part of the misunderstanding may be driven by the meaning of "model". A linear model is a set of distributions, where (for simplicity) we can consider that each distribution is represented by a line. Thus a linear model is a set (or collection) of lines - not a single line. The bigger that set is, the more complex the model is. Regularization--like removing variables from a regression--reduces the complexity of the model by making the model contain fewer lines.
This can be helpful since more complex models can be more prone to overfitting. | Why do we need regularization for linear least squares given that a line is the simplest model possi
I think part of the misunderstanding may be driven by the meaning of "model". A linear model is a set of distributions, where (for simplicity) we can consider that each distribution is represented by |
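To illustrate the overfitting point with a toy simulation (entirely my own synthetic setup, not from the answer): a linear model with many irrelevant features can fit the training data almost perfectly, while a ridge-penalised fit keeps the weights, and hence the available set of "lines", smaller.

```python
import numpy as np

rng = np.random.default_rng(1)

n_train, n_test, p = 30, 200, 15
w_true = np.zeros(p)
w_true[0] = 2.0                       # only one feature truly matters
X = rng.standard_normal((n_train + n_test, p))
y = X @ w_true + rng.standard_normal(n_train + n_test)
Xtr, ytr = X[:n_train], y[:n_train]
Xte, yte = X[n_train:], y[n_train:]

def fit(lam):
    return np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(p), Xtr.T @ ytr)

def mse(w, A, b):
    return np.mean((A @ w - b) ** 2)

w_ols, w_ridge = fit(0.0), fit(10.0)

# OLS (the bigger set of "lines") always fits the training data at least
# as well, but the regularised model has smaller weights and, typically,
# a smaller test error in a setup like this:
print(mse(w_ols, Xtr, ytr), mse(w_ridge, Xtr, ytr))
print(mse(w_ols, Xte, yte), mse(w_ridge, Xte, yte))
```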
20,886 | Why do we need regularization for linear least squares given that a line is the simplest model possible? | There is an elegant theoretical reason one might want to regularize a linear model. It is related to Dikran's answer, in that we are expressing an assumption about the weights. In essence, L2 regularization applied to a least squares linear fit expresses a Gaussian prior assumption on weight space. I'll show below the broad strokes of M-estimators used to derive two things:
(MLE) Assume Gaussian distributed observation noise $\implies$ least squares loss gives maximum likelihood model.
(MAP) Assume Gaussian distribution of model weights $\implies$ L2 regularization on loss gives the maximum a posteriori model.
I'll leave out details for brevity, since they are available broadly already. The point is that MSE loss and L2 regularization can be derived from first principles and simple distributional assumptions.
MLE from observation noise
In the linear regression setting, we learn model weights $\mathbf{w}$ to make scalar predictions $\hat{y}$ from samples $\mathbf{x}$ as
$$
\hat{y} = \mathbf{w}^T\mathbf{x}
$$
When one assumes the true underlying distribution is a linear combination and a Gaussian noise term,
$$
y|\mathbf{x} = \mathbf{w}^T \mathbf{x} + \mathcal{N}(0, \sigma^2)
$$
then maximum likelihood estimation (MLE) induces a mean squared error loss
$$
\mathcal{L}_{MLE}(\mathbf{w}) = \sum_{i=1}^n (\mathbf{w}^T\mathbf{x}_i - y_i)^2
$$
such that minimizing $\mathcal{L}_{MLE}$ produces the MLE estimate of weights.
MAP from weight distribution
Further, if one assumes a Gaussian prior distribution on the model weights $\mathbf{w}$ with each weight $w_i$ having identical variance $\nu^2$
$$
w_i \sim \mathcal{N}(0, \nu^2)
$$
then the analogous maximum a posteriori (MAP) estimation induces the L2 regularizer with regularization weight $\lambda = \frac{\sigma^2}{\nu^2}$
$$
\mathcal{L}_{MAP}(\mathbf{w}) = \sum_{i=1}^n (\mathbf{w}^T\mathbf{x}_i - y_i)^2 + \lambda||\mathbf{w}||^2_2
$$
such that minimizing $\mathcal{L}_{MAP}$ produces the MAP estimate of weights.
So choosing least squares loss expresses a Gaussian observation noise assumption. And choosing L2 regularization expresses a Gaussian model weight assumption, where $\lambda$ expresses an assumed variance ratio between observation noise and model weights. | Why do we need regularization for linear least squares given that a line is the simplest model possi | There is an elegant theoretical reason one might want to regularize a linear model. It is related to Dikran's answer, in that we are expressing an assumption about the weights. In essence, L2 regula | Why do we need regularization for linear least squares given that a line is the simplest model possible?
There is an elegant theoretical reason one might want to regularize a linear model. It is related to Dikran's answer, in that we are expressing an assumption about the weights. In essence, L2 regularization applied to a least squares linear fit expresses a Gaussian prior assumption on weight space. I'll show below the broad strokes of M-estimators used to derive two things:
(MLE) Assume Gaussian distributed observation noise $\implies$ least squares loss gives maximum likelihood model.
(MAP) Assume Gaussian distribution of model weights $\implies$ L2 regularization on loss gives the maximum a posteriori model.
I'll leave out details for brevity, since they are available broadly already. The point is that MSE loss and L2 regularization can be derived from first principles and simple distributional assumptions.
MLE from observation noise
In the linear regression setting, we learn model weights $\mathbf{w}$ to make scalar predictions $\hat{y}$ from samples $\mathbf{x}$ as
$$
\hat{y} = \mathbf{w}^T\mathbf{x}
$$
When one assumes the true underlying distribution is a linear combination and a Gaussian noise term,
$$
y|\mathbf{x} = \mathbf{w}^T \mathbf{x} + \mathcal{N}(0, \sigma^2)
$$
then maximum likelihood estimation (MLE) induces a mean squared error loss
$$
\mathcal{L}_{MLE}(\mathbf{w}) = \sum_{i=1}^n (\mathbf{w}^T\mathbf{x}_i - y_i)^2
$$
such that minimizing $\mathcal{L}_{MLE}$ produces the MLE estimate of weights.
MAP from weight distribution
Further, if one assumes a Gaussian prior distribution on the model weights $\mathbf{w}$ with each weight $w_i$ having identical variance $\nu^2$
$$
w_i \sim \mathcal{N}(0, \nu^2)
$$
then the analogous maximum a posteriori (MAP) estimation induces the L2 regularizer with regularization weight $\lambda = \frac{\sigma^2}{\nu^2}$
$$
\mathcal{L}_{MAP}(\mathbf{w}) = \sum_{i=1}^n (\mathbf{w}^T\mathbf{x}_i - y_i)^2 + \lambda||\mathbf{w}||^2_2
$$
such that minimizing $\mathcal{L}_{MAP}$ produces the MAP estimate of weights.
So choosing least squares loss expresses a Gaussian observation noise assumption. And choosing L2 regularization expresses a Gaussian model weight assumption, where $\lambda$ expresses an assumed variance ratio between observation noise and model weights. | Why do we need regularization for linear least squares given that a line is the simplest model possi
There is an elegant theoretical reason one might want to regularize a linear model. It is related to Dikran's answer, in that we are expressing an assumption about the weights. In essence, L2 regula |
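As a small numerical sanity check of the MAP claim (synthetic data and hyper-parameters are my own choices): gradient descent on $\mathcal{L}_{MAP}$ converges to the closed-form ridge solution $(X^TX + \lambda I)^{-1}X^Ty$ with $\lambda = \sigma^2/\nu^2$, which is the posterior mean (and mode) of the Gaussian model.

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 3
X = rng.standard_normal((n, p))
y = X @ np.array([1.0, -1.0, 0.5]) + rng.standard_normal(n)

sigma2, nu2 = 1.0, 0.5           # assumed noise and prior variances
lam = sigma2 / nu2               # lambda = sigma^2 / nu^2

# Closed-form minimiser of L_MAP (ridge = Gaussian posterior mean/mode).
w_map = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

# Gradient descent on L_MAP(w) = ||Xw - y||^2 + lam * ||w||^2.
H = 2.0 * (X.T @ X + lam * np.eye(p))      # constant Hessian
eta = 1.0 / np.linalg.eigvalsh(H).max()    # safe step size
w = np.zeros(p)
for _ in range(2000):
    w -= eta * 2.0 * (X.T @ (X @ w - y) + lam * w)

print(np.max(np.abs(w - w_map)))  # negligible: both give the MAP estimate
```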
20,887 | Why is the normalisation constant in Bayesian not a marginal distribution | $p(D)$ is a constant with respect to the variable $\theta$, not with respect to the variable $D$.
Think of $D$ as being some data given in the problem and $\theta$ as the parameter to be estimated from the data. In this example, $\theta$ is variable because we do not know the value of the parameter to be estimated, but the data $D$ is fixed. $p(D)$ gives the relative likelihood of observing the fixed data $D$ that we observe, which is constant when $D$ is constant and does not depend in any way on the possible parameter values $\theta$.
Addendum: A visualization would certainly help. Let's formulate a simple model: suppose that our prior distribution is a normal distribution with mean 0 and variance 1, i.e. $p(\theta) = N(0, 1)(\theta)$. And let's suppose that we're going to observe one data point $D$, where $D$ is drawn from a normal distribution with mean $\theta$ and variance 1, i.e. $p(D | \theta) = N(\theta, 1)(D)$. Plotted below is the un-normalized posterior distribution $p(D | \theta) p(\theta)$, which is proportional to the normalized posterior $p(\theta | D) = \frac{p(D | \theta) p(\theta)}{p(D)}$.
For any particular value of $D$, look at the slice of this graph (I've shown two in red and blue). Here $p(D) = \int p(D | \theta) p(\theta) d\theta$ can be visualized as the area under each slice, which I've also plotted off to the side in green. Since the blue slice has a larger area than the red slice, it has a higher $p(D)$. But you can clearly see that these can't currently be proper distributions if they have different areas under them, since that area can't be 1 for both of them. This is why each slice needs to be normalized by dividing by its value of $p(D)$ to make it a proper distribution. | Why is the normalisation constant in Bayesian not a marginal distribution | $p(D)$ is a constant with respect to the variable $\theta$, not with respect to the variable $D$.
Think of $D$ as being some data given in the problem and $\theta$ as the parameter to be estimated fro | Why is the normalisation constant in Bayesian not a marginal distribution
$p(D)$ is a constant with respect to the variable $\theta$, not with respect to the variable $D$.
Think of $D$ as being some data given in the problem and $\theta$ as the parameter to be estimated from the data. In this example, $\theta$ is variable because we do not know the value of the parameter to be estimated, but the data $D$ is fixed. $p(D)$ gives the relative likelihood of observing the fixed data $D$ that we observe, which is constant when $D$ is constant and does not depend in any way on the possible parameter values $\theta$.
Addendum: A visualization would certainly help. Let's formulate a simple model: suppose that our prior distribution is a normal distribution with mean 0 and variance 1, i.e. $p(\theta) = N(0, 1)(\theta)$. And let's suppose that we're going to observe one data point $D$, where $D$ is drawn from a normal distribution with mean $\theta$ and variance 1, i.e. $p(D | \theta) = N(\theta, 1)(D)$. Plotted below is the un-normalized posterior distribution $p(D | \theta) p(\theta)$, which is proportional to the normalized posterior $p(\theta | D) = \frac{p(D | \theta) p(\theta)}{p(D)}$.
For any particular value of $D$, look at the slice of this graph (I've shown two in red and blue). Here $p(D) = \int p(D | \theta) p(\theta) d\theta$ can be visualized as the area under each slice, which I've also plotted off to the side in green. Since the blue slice has a larger area than the red slice, it has a higher $p(D)$. But you can clearly see that these can't currently be proper distributions if they have different areas under them, since that area can't be 1 for both of them. This is why each slice needs to be normalized by dividing by its value of $p(D)$ to make it a proper distribution. | Why is the normalisation constant in Bayesian not a marginal distribution
$p(D)$ is a constant with respect to the variable $\theta$, not with respect to the variable $D$.
Think of $D$ as being some data given in the problem and $\theta$ as the parameter to be estimated fro |
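The "area under a slice" picture can be checked numerically for the stated model ($p(\theta)=N(0,1)$, $p(D\mid\theta)=N(\theta,1)$); below is a sketch using a plain Riemann sum (the grid and tolerances are my own choices). Analytically the marginal here is $D \sim N(0,2)$.

```python
import numpy as np

def norm_pdf(x, mu, var):
    return np.exp(-(x - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)

d_theta = 1e-3
theta = np.arange(-10.0, 10.0, d_theta)  # grid covering the prior's mass

for D in (0.5, 2.0):                     # two different "slices"
    joint = norm_pdf(D, theta, 1.0) * norm_pdf(theta, 0.0, 1.0)
    p_D = joint.sum() * d_theta          # area under this slice
    exact = norm_pdf(D, 0.0, 2.0)        # analytic marginal N(0, 2)
    posterior = joint / p_D              # normalised slice
    print(p_D, exact, posterior.sum() * d_theta)  # p_D matches exact; area ~ 1
```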
20,888 | Why is the normalisation constant in Bayesian not a marginal distribution | The normalising constant in the posterior is the marginal density of the sample in the Bayesian model.
When writing the posterior density as $$p(\theta |D) = \frac{\overbrace{p(D|\theta)}^\text{likelihood }\overbrace{p(\theta)}^\text{ prior}}{\underbrace{\int p(D|\theta)p(\theta)\,\text{d}\theta}_\text{marginal}}$$
[which unfortunately uses the same symbol $p(\cdot)$ with different meanings], this density is conditional upon $D$, with
$$\int p(D|\theta)p(\theta)\,\text{d}\theta=\mathfrak e(D)$$
being the marginal density of the sample $D$. Obviously, conditional on a realisation of $D$, $\mathfrak e(D)$ is constant, while, as $D$ varies, so does $\mathfrak e(D)$. In probabilistic terms,
$$p(\theta|D) \mathfrak e(D) = p(D|\theta) p(\theta)$$
is the joint distribution density of the (random) pair $(\theta,D)$ in the Bayesian model [where both $D$ and $\theta$ are random variables].
The statistical meaning of $\mathfrak e(D)$ is one of "evidence" (or "prior predictive" or yet "marginal likelihood") about the assumed model $p(D|\theta)$. As nicely pointed out by Ilmari Karonen, this is the density of the sample prior to observing it and with the only information on the parameter $\theta$ provided by the prior distribution. Meaning that, the sample $D$ is obtained by first generating a parameter value $\theta$ from the prior, then generating the sample $D$ conditional on this realisation of $\theta$.
By taking the average of $p(D|\theta)$ across values of $\theta$, weighted by the prior $p(\theta)$, one produces a numerical value that can be used to compare this model [in the statistical sense of a family of parameterised distributions with unknown parameter] with other models, i.e. other families of parameterised distributions with unknown parameter. The Bayes factor is a ratio of such evidences.
For instance, if $D$ is made of a single observation, say $x=2.13$, and if one wants to compare Model 1, a Normal (distribution) model, $X\sim \mathcal N(\theta,1)$, with $\theta$ unknown, to Model 2, an Exponential (distribution) model, $X\sim \mathcal E(\lambda)$, with $\lambda$ unknown, a Bayes factor would derive both evidences
$$\mathfrak e_1(x) = \int_{-\infty}^{+\infty} \frac{\exp\{-(x-\theta)^2/2\}}{\sqrt{2\pi}}\text{d}\pi_1(\theta)$$
and $$\mathfrak e_2(x) = \int_{0}^{+\infty} \lambda\exp\{-x\lambda\}\text{d}\pi_2(\lambda)$$
To construct such evidences, one needs to set both priors $\pi_1(\cdot)$ and $\pi_2(\cdot)$. For illustration's sake, say
$$\pi_1(\theta)=\frac{\exp\{-\theta^2/2\}}{\sqrt{2\pi}}\quad\text{and}\quad\pi_2(\lambda)=e^{-\lambda}$$
Then
$$\mathfrak e_1(x) = \frac{\exp\{-x^2/4\}}{\sqrt{4\pi}}\quad\text{and}\quad\mathfrak e_2(x) = \frac{1}{(1+x)^2}$$
leading to
$$\mathfrak e_1(2.13) = 0.091\quad\text{and}\quad\mathfrak e_2(2.13) = 0.102$$
which gives some degree of advantage to Model 2, the Exponential distribution model. | Why is the normalisation constant in Bayesian not a marginal distribution | The normalising constant in the posterior is the marginal density of the sample in the Bayesian model.
When writing the posterior density as $$p(\theta |D) = \frac{\overbrace{p(D|\theta)}^\text{likeli | Why is the normalisation constant in Bayesian not a marginal distribution
The normalising constant in the posterior is the marginal density of the sample in the Bayesian model.
When writing the posterior density as $$p(\theta |D) = \frac{\overbrace{p(D|\theta)}^\text{likelihood }\overbrace{p(\theta)}^\text{ prior}}{\underbrace{\int p(D|\theta)p(\theta)\,\text{d}\theta}_\text{marginal}}$$
[which unfortunately uses the same symbol $p(\cdot)$ with different meanings], this density is conditional upon $D$, with
$$\int p(D|\theta)p(\theta)\,\text{d}\theta=\mathfrak e(D)$$
being the marginal density of the sample $D$. Obviously, conditional on a realisation of $D$, $\mathfrak e(D)$ is constant, while, as $D$ varies, so does $\mathfrak e(D)$. In probabilistic terms,
$$p(\theta|D) \mathfrak e(D) = p(D|\theta) p(\theta)$$
is the joint distribution density of the (random) pair $(\theta,D)$ in the Bayesian model [where both $D$ and $\theta$ are random variables].
The statistical meaning of $\mathfrak e(D)$ is one of "evidence" (or "prior predictive" or yet "marginal likelihood") about the assumed model $p(D|\theta)$. As nicely pointed out by Ilmari Karonen, this is the density of the sample prior to observing it and with the only information on the parameter $\theta$ provided by the prior distribution. Meaning that, the sample $D$ is obtained by first generating a parameter value $\theta$ from the prior, then generating the sample $D$ conditional on this realisation of $\theta$.
By taking the average of $p(D|\theta)$ across values of $\theta$, weighted by the prior $p(\theta)$, one produces a numerical value that can be used to compare this model [in the statistical sense of a family of parameterised distributions with unknown parameter] with other models, i.e. other families of parameterised distributions with unknown parameter. The Bayes factor is a ratio of such evidences.
For instance, if $D$ is made of a single observation, say $x=2.13$, and if one wants to compare Model 1, a Normal (distribution) model, $X\sim \mathcal N(\theta,1)$, with $\theta$ unknown, to Model 2, an Exponential (distribution) model, $X\sim \mathcal E(\lambda)$, with $\lambda$ unknown, a Bayes factor would derive both evidences
$$\mathfrak e_1(x) = \int_{-\infty}^{+\infty} \frac{\exp\{-(x-\theta)^2/2\}}{\sqrt{2\pi}}\text{d}\pi_1(\theta)$$
and $$\mathfrak e_2(x) = \int_{0}^{+\infty} \lambda\exp\{-x\lambda\}\text{d}\pi_2(\lambda)$$
To construct such evidences, one needs to set both priors $\pi_1(\cdot)$ and $\pi_2(\cdot)$. For illustration's sake, say
$$\pi_1(\theta)=\frac{\exp\{-\theta^2/2\}}{\sqrt{2\pi}}\quad\text{and}\quad\pi_2(\lambda)=e^{-\lambda}$$
Then
$$\mathfrak e_1(x) = \frac{\exp\{-x^2/4\}}{\sqrt{4\pi}}\quad\text{and}\quad\mathfrak e_2(x) = \frac{1}{(1+x)^2}$$
leading to
$$\mathfrak e_1(2.13) = 0.091\quad\text{and}\quad\mathfrak e_2(2.13) = 0.102$$
which gives some degree of advantage to Model 2, the Exponential distribution model. | Why is the normalisation constant in Bayesian not a marginal distribution
The normalising constant in the posterior is the marginal density of the sample in the Bayesian model.
When writing the posterior density as $$p(\theta |D) = \frac{\overbrace{p(D|\theta)}^\text{likeli |
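Both evidence integrals are one-dimensional and easy to check by direct numerical integration; a sketch (the grids are my own choices):

```python
import numpy as np

x = 2.13  # the single observation

# Model 1: X ~ N(theta, 1) with prior theta ~ N(0, 1).
h = 1e-3
theta = np.arange(-10.0, 10.0, h)
like1 = np.exp(-(x - theta) ** 2 / 2) / np.sqrt(2 * np.pi)
prior1 = np.exp(-(theta ** 2) / 2) / np.sqrt(2 * np.pi)
e1 = np.sum(like1 * prior1) * h

# Model 2: X ~ Exp(lambda) with prior lambda ~ Exp(1).
lam = np.arange(1e-6, 30.0, h)
e2 = np.sum(lam * np.exp(-x * lam) * np.exp(-lam)) * h

print(e1, e2)  # e2 > e1, i.e. the evidence favours the exponential model
```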
20,889 | Why is the normalisation constant in Bayesian not a marginal distribution | I think the easiest way to figure out what's going on is to think about how you might approximate the integral.
We have $p(\mathcal{D}) = \int p(\mathcal{D}|\theta) p(\theta) \rm d \theta$.
Note that this is just the average of the likelihood (first term in the integrand) over the prior distribution.
One way to compute this integral approximately: sample from the prior, evaluate the likelihood, repeat this lots of times and average the results.
Because the prior and the dataset are both fixed, the result of this procedure doesn't depend on the value of $\theta$. $p(\mathcal{D})$ is just the expected likelihood under the prior. | Why is the normalisation constant in Bayesian not a marginal distribution | I think the easiest way to figure out what's going on is to think about how you might approximate the integral.
We have $p(\mathcal{D}) = \int p(\mathcal{D}|\theta) p(\theta) \rm d \theta$.
Note that | Why is the normalisation constant in Bayesian not a marginal distribution
I think the easiest way to figure out what's going on is to think about how you might approximate the integral.
We have $p(\mathcal{D}) = \int p(\mathcal{D}|\theta) p(\theta) \rm d \theta$.
Note that this is just the average of the likelihood (first term in the integrand) over the prior distribution.
One way to compute this integral approximately: sample from the prior, evaluate the likelihood, repeat this lots of times and average the results.
Because the prior and the dataset are both fixed, the result of this procedure doesn't depend on the value of $\theta$. $p(\mathcal{D})$ is just the expected likelihood under the prior. | Why is the normalisation constant in Bayesian not a marginal distribution
I think the easiest way to figure out what's going on is to think about how you might approximate the integral.
We have $p(\mathcal{D}) = \int p(\mathcal{D}|\theta) p(\theta) \rm d \theta$.
Note that |
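The procedure described above, sample from the prior and average the likelihood, is a few lines of code; a sketch assuming the normal-normal toy model used elsewhere in this thread ($\theta\sim N(0,1)$, $D\mid\theta\sim N(\theta,1)$), where the exact marginal $D\sim N(0,2)$ is available for comparison:

```python
import numpy as np

rng = np.random.default_rng(3)
D = 1.0  # the fixed, observed data (a single point here)

# Sample from the prior, evaluate the likelihood, average:
theta = rng.standard_normal(200_000)                      # theta ~ N(0, 1)
like = np.exp(-(D - theta) ** 2 / 2) / np.sqrt(2 * np.pi)
p_D_mc = like.mean()

# For this toy model the marginal is known in closed form: D ~ N(0, 2).
p_D_exact = np.exp(-(D ** 2) / 4) / np.sqrt(4 * np.pi)
print(p_D_mc, p_D_exact)  # the two agree closely
```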
20,890 | Why is the normalisation constant in Bayesian not a marginal distribution | Why is the normalisation constant in Bayesian not a marginal distribution?
The normalisation constant is a marginal distribution.
"How is $z$ evaluated to be a constant when evaluating the integral becomes the marginal distribution $p(D)$"
The integral does indeed provide a probability density for the observations ($D$ can be any value). So $z$, or better $z(D)$, is a function of $D$.
But when you evaluate $z(D)$ for a particular given observation $D$ then the value is a constant (a single number and not a distribution).
$$p(\theta |D) = \frac{p(D|\theta)p(\theta)}{\int p(D|\theta)p(\theta)d\theta} = \frac{p(D|\theta)p(\theta)}{p(D)}$$
Note that the posterior $p(\theta |D)$ is a function of $D$. For different $D$ you will get a different result. | Why is the normalisation constant in Bayesian not a marginal distribution | Why is the normalisation constant in Bayesian not a marginal distribution?
The normalisation constant is a marginal distribution.
"How is $z$ evaluated to be a constant when evaluating the integral | Why is the normalisation constant in Bayesian not a marginal distribution
Why is the normalisation constant in Bayesian not a marginal distribution?
The normalisation constant is a marginal distribution.
"How is $z$ evaluated to be a constant when evaluating the integral becomes the marginal distribution $p(D)$"
The integral does indeed provide a probability density for the observations ($D$ can be any value). So $z$, or better $z(D)$, is a function of $D$.
But when you evaluate $z(D)$ for a particular given observation $D$ then the value is a constant (a single number and not a distribution).
$$p(\theta |D) = \frac{p(D|\theta)p(\theta)}{\int p(D|\theta)p(\theta)d\theta} = \frac{p(D|\theta)p(\theta)}{p(D)}$$
Note that the posterior $p(\theta |D)$ is a function of $D$. For different $D$ you will get a different result. | Why is the normalisation constant in Bayesian not a marginal distribution
Why is the normalisation constant in Bayesian not a marginal distribution?
The normalisation constant is a marginal distribution.
"How is $z$ evaluated to be a constant when evaluating the integral |
20,891 | Is stationarity preserved under a linear combination? | Perhaps surprisingly, this is not true. (Independence of the two time series will make it true, however.)
I understand "stable" to mean stationary, because those words appear to be used interchangeably in millions of search hits, including at least one on our site.
For a counterexample, let $X$ be a non-constant stationary time series for which every $X_t$ is independent of $X_s$, $s\ne t,$ and whose marginal distributions are symmetric around $0$. Define
$$Y_t = (-1)^t X_t.$$
These plots show portions of the three time series discussed in this post. $X$ was simulated as a series of independent draws from a standard Normal distribution.
To show that $Y$ is stationary, we need to demonstrate that the joint distribution of $(Y_{s+t_1}, Y_{s+t_2}, \ldots, Y_{s+t_n})$ for any $t_1\lt t_2 \lt \cdots \lt t_n$ does not depend on $s$. But this follows directly from the symmetry and independence of the $X_t$.
These lagged scatterplots (for a sequence of 512 values of $Y$) illustrate the assertion that the joint bivariate distributions of $Y$ are as expected: independent and symmetric. (A "lagged scatterplot" displays the values of $Y_{t+s}$ against $Y_{t}$; values of $s=0,1,2$ are shown.)
Nevertheless, choosing $\alpha=\beta=1/2$, we have
$$\alpha X_t + \beta Y_t = X_t$$
for even $t$ and otherwise
$$\alpha X_t + \beta Y_t = 0.$$
Since $X$ is non-constant, obviously these two expressions have different distributions for any $t$ and $t+1$, whence the series $(X+Y)/2$ is not stationary. The colors in the first figure highlight this non-stationarity in $(X+Y)/2$ by distinguishing the zero values from the rest. | Is stationarity preserved under a linear combination? | Perhaps surprisingly, this is not true. (Independence of the two time series will make it true, however.)
I understand "stable" to mean stationary, because those words appear to be used interchangeab | Is stationarity preserved under a linear combination?
Perhaps surprisingly, this is not true. (Independence of the two time series will make it true, however.)
I understand "stable" to mean stationary, because those words appear to be used interchangeably in millions of search hits, including at least one on our site.
For a counterexample, let $X$ be a non-constant stationary time series for which every $X_t$ is independent of $X_s$, $s\ne t,$ and whose marginal distributions are symmetric around $0$. Define
$$Y_t = (-1)^t X_t.$$
These plots show portions of the three time series discussed in this post. $X$ was simulated as a series of independent draws from a standard Normal distribution.
To show that $Y$ is stationary, we need to demonstrate that the joint distribution of $(Y_{s+t_1}, Y_{s+t_2}, \ldots, Y_{s+t_n})$ for any $t_1\lt t_2 \lt \cdots \lt t_n$ does not depend on $s$. But this follows directly from the symmetry and independence of the $X_t$.
These lagged scatterplots (for a sequence of 512 values of $Y$) illustrate the assertion that the joint bivariate distributions of $Y$ are as expected: independent and symmetric. (A "lagged scatterplot" displays the values of $Y_{t+s}$ against $Y_{t}$; values of $s=0,1,2$ are shown.)
Nevertheless, choosing $\alpha=\beta=1/2$, we have
$$\alpha X_t + \beta Y_t = X_t$$
for even $t$ and otherwise
$$\alpha X_t + \beta Y_t = 0.$$
Since $X$ is non-constant, obviously these two expressions have different distributions for any $t$ and $t+1$, whence the series $(X+Y)/2$ is not stationary. The colors in the first figure highlight this non-stationarity in $(X+Y)/2$ by distinguishing the zero values from the rest. | Is stationarity preserved under a linear combination?
Perhaps surprisingly, this is not true. (Independence of the two time series will make it true, however.)
I understand "stable" to mean stationary, because those words appear to be used interchangeab |
20,892 | Is stationarity preserved under a linear combination? | Consider the two-dimensional process
$$w_t = (x_t, y_t)$$
If it is strictly stationary, or alternatively, if the processes $(x_t)$ and $(y_t)$ are jointly strictly stationary, then a process formed by any measurable function $f:= f(x_t,y_t), f:\mathbb R^2 \to \mathbb R$ will also be strictly stationary.
In @whuber's example we have
$$w_t = (x_t, (-1)^t x_t)$$
To examine whether this $w_t$ is strictly stationary, we have to first obtain its probability distribution. Assume the variables are absolutely continuous. For some $c \in \mathbb R$, we have
$$\text{Prob}(X_t \leq c,(-1)^t X_t \leq c)= \cases {\text{Prob}(X_t \leq c, X_t \leq c)\;\;\;\; \text{t is even}\\ \\ \text{Prob}(X_t \leq c, -X_t \leq c)\;\;\;\; \text{t is odd}}$$
$$= \cases {\text{Prob}(X_t \leq c)\;\;\;\; \text{t is even}\\ \\ \text{Prob}(-c\leq X_t \leq c)\;\;\;\; \text{t is odd}}$$
$$\implies \text{Prob}(X_t \leq c,(-1)^t X_t \leq c)= \cases {\text{Prob}(X_t \leq c)\;\;\;\; \text{t is even}\\ \\ \text{Prob}( |X_t| \leq c)\;\;\;\; \text{t is odd}}$$
Sticking with whuber's example, the two branches are different probability distributions because $x_t$ has a distribution symmetric around zero.
Now to examine strict stationarity, shift the index by a whole number $k>0$. We have
$$\text{Prob}(X_{t+k} \leq c,(-1)^t X_{t+k} \leq c)= \cases {\text{Prob}(X_{t+k} \leq c)\;\;\;\; \text{t+k is even}\\ \\ \text{Prob}( |X_{t+k}| \leq c)\;\;\;\; \text{t+k is odd}}$$
For strict stationarity, we must have
$$\text{Prob}(X_t \leq c,(-1)^t X_t \leq c)=\text{Prob}(X_{t+k} \leq c,(-1)^t X_{t+k} \leq c),\;\;\; \forall t,k$$
And we don't have this equality $\forall t,k$, because, say, if $t$ is even and $k$ is odd, then $t+k$ is odd, in which case
$$\text{Prob}(X_t \leq c,(-1)^t X_t \leq c) = \text{Prob}(X_t \leq c) $$
while
$$ \text{Prob}(X_{t+k} \leq c,(-1)^t X_{t+k} \leq c) = \text{Prob}( |X_{t+k}| \leq c)= \text{Prob}( |X_{t}| \leq c)$$
So we do not have joint strict stationarity, and then we have no guarantees about what will happen to a function of $f(x_t,y_t)$.
I have to point out that the dependence between $x_t$ and $y_t$ is a necessary but not a sufficient condition for the loss of joint strict stationarity. It is the additional assumption of dependence of $y_t$ on the index that does the job.
Consider
$$q_t = (x_t, \theta x_t),\;\;\; \theta \in \mathbb R$$
If one does the previous work for $(q_t)$ one will find that joint strict stationarity holds here.
This is good news because for a process to depend on the index and be strictly stationary is not among the modelling assumptions we need to make very often. In practice therefore, if we have marginal strict stationarity, we expect also joint strict stationarity even in the presence of dependence (although we should of course check.) | Is stationarity preserved under a linear combination? | Consider the two-dimensional process
20,893 | Is stationarity preserved under a linear combination? | I would say yes, since it has an MA representation.
One observation: I think that having an MA representation implies weak stationarity; I am not sure whether it implies strong stationarity.
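For the weak-stationarity half of that remark, a quick numerical check (my sketch, not from the answer) on an MA(1) process: its variance and lag-1 autocovariance come out the same at every $t$:

```python
import numpy as np

rng = np.random.default_rng(42)
theta, reps, T = 0.7, 50_000, 8

# MA(1): y_t = e_t + theta * e_{t-1} with e_t i.i.d. N(0,1).
# Weak stationarity: Var(y_t) = 1 + theta^2 and Cov(y_t, y_{t-1}) = theta
# are the same at every index t.
e = rng.standard_normal((reps, T + 1))
y = e[:, 1:] + theta * e[:, :-1]

print(np.round(y.var(axis=0), 2))                        # each roughly 1.49
print(np.round((y[:, 1:] * y[:, :-1]).mean(axis=0), 2))  # each roughly 0.70
```

This only checks the first two moments, so it illustrates weak stationarity and says nothing about the strict case.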
20,894 | Is it wrong to refer to results as "nearly" or "somewhat" significant? | If you want to allow "significance" to admit of degrees then fair enough ("somewhat significant", "fairly significant"), but avoid phrases that suggest you're still wedded to the idea of a threshold, such as "nearly significant", "approaching significance", or "at the cusp of significance" (my favourite from "Still Not Significant" on the blog Probable Error), if you don't want to appear desperate.
20,895 | Is it wrong to refer to results as "nearly" or "somewhat" significant? | From my perspective, the issue boils down to what it actually means to carry out a significance test. Significance testing was devised as a means of making the decision of either to reject the null hypothesis or to fail to reject it. Fisher himself introduced the infamous 0.05 rule for making that (arbitrary) decision.
Basically, the logic of significance testing is that the user has to specify an alpha level for rejecting the null hypothesis (conventionally 0.05) before collecting the data. After completing the significance test, the user rejects the null if the p value is smaller than the alpha level (or fails to reject it otherwise).
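As code, the procedure described above is nothing but a fixed threshold comparison (a trivial sketch; the function name is my own):

```python
def nhst_decision(p_value: float, alpha: float = 0.05) -> str:
    # alpha must be fixed before the data are collected; afterwards the
    # p-value is only ever compared against it, never interpreted further.
    return "reject H0" if p_value < alpha else "fail to reject H0"

print(nhst_decision(0.049))  # reject H0
print(nhst_decision(0.051))  # fail to reject H0
```

The binary output is the point: under this logic, 0.049 and 0.051 lead to different decisions, but neither carries any extra gradation.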
The reason you cannot declare an effect to be highly significant (say, at the 0.001 level) is that you cannot find stronger evidence than you set out to find. So, if you set your alpha level at 0.05 before the test, you can only find evidence at the 0.05 level, regardless of how small your p value is. In the same way, speaking of effects that are "somewhat significant" or "approaching significance" also doesn't make much sense, because you chose this arbitrary criterion of 0.05. If you interpret the logic of significance testing very literally, anything bigger than 0.05 is not significant.
I agree that terms like "approaching significance" are often used to enhance the prospects of publication. However, I do not think that authors can be blamed for that because the current publication culture in some sciences still heavily relies on the "holy grail" of 0.05.
Some of these issues are discussed in:
Gigerenzer, G. (2004). Mindless statistics. The Journal of Socio-Economics, 33(5), 587-606.
Royall, R. (1997). Statistical evidence: A likelihood paradigm (Vol. 71). CRC Press.
20,896 | Is it wrong to refer to results as "nearly" or "somewhat" significant? | This slippery slope calls back to the Fisher vs Neyman/Pearson framework for null-hypothesis significance testing (NHST). On the one hand, one wants to make a quantitative assessment of just how unlikely a result is under the null hypothesis (e.g., effect sizes). On the other hand, at the end of the day you want a discrete decision as to whether your results are, or are not, likely to have been due to chance alone. What we've ended up with is a kind of hybrid approach that isn't very satisfying.
In most disciplines, the conventional p for significance is set at 0.05, but there is really no grounding for why this must be so. When I review a paper, I have absolutely no problem with an author calling 0.06 significant, or even 0.07, provided that the methodology is sound, and the entire picture, including all analyses, figures, etc. tell a consistent and believable story. Where you run into problems is when authors attempt to make a story out of trivial data with small effect sizes. Conversely, I might not fully 'believe' a test is practically meaningful even when it reaches conventional p < 0.05 significance. A colleague of mine once said: "Your statistics should simply back up what is already apparent in your figures."
That all said, I think Vasilev is correct. Given the broken publication system, you pretty much have to include p values, and therefore you pretty much have to use the word 'significant' to be taken seriously, even if it requires adjectives like "marginally" (which I prefer). You can always fight it out in peer review, but you have to get there first.
20,897 | Is it wrong to refer to results as "nearly" or "somewhat" significant? | The difference between two p-values itself typically is not significant. So, it doesn't matter whether your p-value is 0.05, 0.049, 0.051...
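One way to see this (my illustration, not part of the original answer) is to map two-sided p-values just either side of 0.05 back to z-scores via the inverse normal CDF; the underlying test statistics are nearly identical:

```python
from statistics import NormalDist

def z_from_p(p: float) -> float:
    # |z| score whose two-sided p-value equals p under a normal reference
    return NormalDist().inv_cdf(1 - p / 2)

z_sig, z_almost = z_from_p(0.049), z_from_p(0.051)
print(round(z_sig, 2), round(z_almost, 2))  # both close to 1.96
print(round(z_sig - z_almost, 3))           # a tiny gap in the test statistic
```

A difference that small in the test statistic is far inside ordinary sampling noise, which is the point being made above.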
With regards to p-values as a measure of strength of association: A p-value is not directly a measure of strength of association. A p-value is the probability of finding data as extreme as or more extreme than the data you have observed, given that the parameter is hypothesized to be 0 (if one is interested in the null hypothesis -- see Nick Cox's comment). However, this is often not the quantity the researcher is interested in. Many researchers are instead interested in answering questions like "what is the probability that the parameter is greater than some chosen cut-off value?" If this is what you are interested in, you need to incorporate additional prior information into your model.
20,898 | Is it wrong to refer to results as "nearly" or "somewhat" significant? | Whether "nearly significant" makes sense or not depends on one's philosophy of statistical inference. It's perfectly valid to consider the alpha level as a line in the sand, in which case one should only pay attention to whether $p<\alpha$ or $p>\alpha$. For such an "absolutist", "nearly significant" makes no sense. But it's also perfectly valid to think of p values as providing continuous measures of strength of support (not strength of effect, of course). For such a "continualist", "nearly significant" is a sensible way to describe a result with a moderate p-value. The problem arises when people mix these two philosophies - or worse, are not aware that both exist. (By the way - people often assume these map cleanly onto Neyman/Pearson and Fisher, but they don't; hence my admittedly clumsy terms for them). More detail about this in a blog post on this subject here: https://scientistseessquirrel.wordpress.com/2015/11/16/is-nearly-significant-ridiculous/
Whether "nearly significant" makes sense or not depends on one's philosophy of statistical inference. It's perfectly valid to consider the alpha level as a line in the sand, in which case one should only pay attention to whether $p<\alpha$ or $p>\alpha$. For such an "absolutist", "nearly significant" makes no sense. But it's also perfectly valid to think of p values as providing continuous measures of strength of support (not strength of effect, of course). For such a "continualist", "nearly significant" is a sensible way to describe a result with a moderate p-value. The problem arises when people mix these two philosophies - or worse, are not aware that both exist. (By the way - people often assume these map cleanly onto Neyman/Pearson and Fisher, but they don't; hence my admittedly clumsy terms for them). More detail about this in a blog post on this subject here: https://scientistseessquirrel.wordpress.com/2015/11/16/is-nearly-significant-ridiculous/ | Is it wrong to refer to results as "nearly" or "somewhat" significant?
Whether "nearly significant" makes sense or not depends on one's philosophy of statistical inference. It's perfectly valid to consider the alpha level as a line in the sand, in which case one should |
20,899 | Is it wrong to refer to results as "nearly" or "somewhat" significant? | I tend to think saying something is almost statistically significant is not correct from a technical standpoint. Once you set your tolerance level, the statistical test of significance is set. You have to go back to the idea of sampling distributions. If your tolerance level is, say, 0.05 and you happen to get a p-value of 0.053, then it is just by chance that the sample used yielded that statistic. You could very well get another sample that may not yield the same results; I believe the probability of that occurring is based on the tolerance level set and not on the sample statistic. Remember that you are testing samples against a population parameter, and samples have their own sampling distribution. So in my opinion, either something is statistically significant or it is not.
20,900 | Is it wrong to refer to results as "nearly" or "somewhat" significant? | The p-value is uniformly distributed on $[0,1]$ under $\mathcal{H}_0$, so getting a result with a p-value of 0.051 is as likely as getting a result with a p-value of 1. Since you have to set the significance level before seeing the data, you reject the null for every p-value $p \leq \alpha$. When you do not reject your null, you have to treat the p-value as uniformly distributed; a higher or lower value is essentially meaningless.
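That uniformity claim is easy to verify by simulation; the sketch below (mine, assuming a two-sided z-test with known unit variance) shows that p-values near 0.05 are no more common than anywhere else in $[0,1]$:

```python
import math
import numpy as np

rng = np.random.default_rng(7)
n, reps = 30, 5_000

# Under H0 (mean 0, variance 1) the z statistic is N(0,1), so the
# two-sided p-value 2*(1 - Phi(|z|)) is Uniform[0,1].
x = rng.standard_normal((reps, n))
z = x.mean(axis=1) * math.sqrt(n)
pvals = np.array(
    [2 * (1 - 0.5 * (1 + math.erf(abs(zi) / math.sqrt(2)))) for zi in z]
)

# Any interval of width 0.05 catches about 5% of the p-values.
print(round(np.mean(pvals < 0.05), 3))
print(round(np.mean((pvals > 0.50) & (pvals < 0.55)), 3))
```

Both fractions come out near 0.05, so a p-value just above the threshold is not evidence of anything "almost" happening.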
This is a wholly different story when you reject the null, since the p-value is not uniformly distributed under $\mathcal{H}_1$ but the distribution depends on the parameter.
See, for example, Wikipedia.