17,901 | Testing for statistically significant difference in time series? | I would not start by taking differences of stock prices, normalized for the same initial capital or not. Stock prices do not go below zero, so at best the differences between two stock prices (or the accrued difference in initial capital outlay) would only be slightly more normal than the non-normal distributions of price (or capital worth) of the stocks taken individually, and not normal enough to justify a difference analysis.
However, as stock prices are approximately log-normal, I would start by normalizing with the ratio of the two prices $\frac{\$A}{\$B}$, which obviates having to normalize to the initial capital outlay. To be specific, what I am expecting is that stock prices vary as proportional data: a change from a price of $\$1.00$ to $\$1.05$, discretization aside, is as expected as a change from $\$100.00$ to $\$105.00$. Then all you have to worry about is whether the ratio of stock prices is increasing or decreasing in time. For that, I would suggest ARIMA or some other trend analysis.
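As a rough sketch of that suggestion in R (simulated log-normal prices standing in for the two stocks; the ARIMA order and the linear trend check are illustrative choices, not part of the original answer):

```r
set.seed(1)
# simulated log-normal price paths standing in for stocks A and B
A <- 100 * exp(cumsum(rnorm(250, mean = 0.0005, sd = 0.01)))
B <- 100 * exp(cumsum(rnorm(250, mean = 0.0002, sd = 0.01)))

r <- log(A / B)                      # log of the price ratio, roughly normal
fit <- arima(r, order = c(0, 1, 1))  # one simple candidate; compare orders by AIC
fit

# crude check on whether the ratio drifts up or down over time
summary(lm(r ~ seq_along(r)))$coefficients
```

Whether a simple trend test or a full ARIMA fit is more appropriate depends on how persistent the autocorrelation in the ratio series is.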
17,902 | Testing for statistically significant difference in time series? | You can use Kendall's tau, Spearman's rho, or just the Pearson correlation coefficient to check for these. In R the code will look something like
library(fBasics)  # not strictly needed here: cor() is in base R's stats package
> cor(A,B)
[1] 0.5485227
> cor(A,B,method='kendall')
[1] 0.3581761
> cor(A,B,method='spearman')
[1] 0.5095149
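If you want a p-value and not just the coefficient, base R's cor.test() runs the corresponding significance test for all three methods (A and B below are placeholder vectors, not the asker's data):

```r
set.seed(1)
A <- cumsum(rnorm(100))   # placeholder series
B <- 0.5 * A + rnorm(100)

cor.test(A, B)                       # Pearson, with p-value and confidence interval
cor.test(A, B, method = "kendall")   # Kendall's tau
cor.test(A, B, method = "spearman")  # Spearman's rho
```

Note that these p-values assume independent pairs; for autocorrelated time series they will be optimistic.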
17,903 | Testing for statistically significant difference in time series? | This sounds like an attempt to compare two samples, each of size one. If the two time series are not equal, then there is, with hindsight, an arbitrage strategy.
The question is whether this strategy is discoverable in advance. To answer this you must have some idea of the universe from which strategies can be drawn, e.g. an arbitrageur could be guided by exchange rates, weather, phases of the moon... You can then find the best arbitrage strategy from the family you have defined.
If the family is big, then there is a risk of overfitting.
17,904 | Testing for statistically significant difference in time series? | Let me split my answer into two parts
1) Logical reasoning:
Do these two securities A and B belong to the same organization, product, firm, or service, or to different ones?
If they are different, then we should not run a comparison test, because any difference between two numbers cannot be generalized: just by comparing numbers we cannot conclude anything, and we would be missing the big picture.
2) Statistical reasoning:
If you consider these to be independent items A and B, then you can go for a statistical test of independence. (Depending on the number of data points, you need to decide whether to use a parametric or a non-parametric test.)
Then check the p-value to find out whether there is a significant difference in mean value or not.
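A sketch of the statistical route with invented placeholder data; as the answer says, the choice between the parametric and non-parametric version depends on the data:

```r
set.seed(42)
A <- rnorm(50, mean = 10)    # invented measurements for item A
B <- rnorm(50, mean = 10.5)  # invented measurements for item B

t.test(A, B)$p.value         # parametric: Welch two-sample t-test
wilcox.test(A, B)$p.value    # non-parametric: Wilcoxon rank-sum test
```

Both tests assume independent observations, which consecutive time-series values usually are not.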
17,905 | Visualizing many left-skewed distributions | Just an idea: if you can describe the distributions you got relatively well with a normal distribution, you can do 2-dimensional plots showing the impact of A, B and C on the fitted distribution's parameters: mean and standard deviation.
Or you can try to find other descriptive measures for the distributions you got and show the impact of the three variables on them.
If you find that two variables have interactions, you can do a 3D plot. Let's hope they do not all interact with one another. ;)
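A base-R sketch of the idea, with made-up left-skewed data; the factor names and the negated log-normal response are purely illustrative:

```r
set.seed(1)
# made-up left-skewed response under two crossed factors A and B
d <- expand.grid(A = c("a1", "a2"), B = c("b1", "b2"), rep = 1:200)
d$y <- -rlnorm(nrow(d), meanlog = ifelse(d$A == "a2", 0.5, 0), sdlog = 0.4)

# one fitted (mean, sd) pair per factor combination
fits <- do.call(rbind, lapply(split(d, interaction(d$A, d$B)), function(g)
  data.frame(A = g$A[1], B = g$B[1], mean = mean(g$y), sd = sd(g$y))))

plot(fits$mean, fits$sd, pch = 19,
     xlab = "fitted mean", ylab = "fitted standard deviation")
text(fits$mean, fits$sd, labels = paste(fits$A, fits$B), pos = 3)
```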
17,906 | Visualizing many left-skewed distributions | Something you can consider is a density plot that is staggered within the same plane, then faceted by each factor so it's more visible. With many factors this can still be hard, but at least you can crunch a lot of information into a tinier space this way. First I load three requisite libraries in R: tidyverse for data wrangling, lavaan for the Holzinger data, and ggridges for the ridge plot. The rest of the code is fairly specific and would require some familiarity with R, but it is just to show you how and what it will look like. I have added annotations as comments if that is helpful.
#### Load Libraries ####
library(lavaan)
library(tidyverse)
library(ggridges)
#### Plot Density by Stagger ####
HolzingerSwineford1939 %>% # take this data
as_tibble() %>% # make it easy to read
select(school,sex,7:15) %>% # select only these columns (7:15 are "X" items)
mutate(sex = ifelse(sex==1,"Male","Female")) %>% # change gender coding
pivot_longer(cols = 3:11) %>% # pivot data
ggplot(aes(x=value, # plot values of survey data here
y=name, # arrange by name
fill=factor(sex)))+ # fill color by sex
geom_density_ridges()+ # plot ridges
facet_grid(school~sex)+ # facet ridges by these factors
scale_fill_manual(values = c("darkred","hotpink"))+ # fill with these colors
theme_bw()+ # edit theme
theme(legend.position = "none")+ # remove legend (redundant)
labs(x="Value",
y="Name",
title = "By Gender Density of X Values") # label plot
You should get a ridgeline plot where the y-axis represents 9 different test items and the frame labels show which factor the densities belong to. You can then plainly see which items are skewed and how they skew based on each factor.
17,907 | Visualizing many left-skewed distributions | As noted in the comments, especially by Sextus Empiricus, fold change is often depicted on a logarithmic scale (specifically log2). It appears to work well in your case because the variances are much more similar when shown on the log scale.
I don't see any real evidence in the plots that skewness is a meaningful problem. There's a small handful of extreme values but those don't really call for a change in your visualisation.
That said, there are probably nicer ways to plot this (while retaining the general design and the log2 scale). If you have a small number of points, you could just plot the data directly or perhaps use a beeswarm plot instead of using boxplots. If you have a large number, violin plots, ridgeline plots (as Shawn Hemelstrand's answer shows), or density plots would all be good options.
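With base R alone, plotting on the log2 scale is just a matter of transforming first (simulated fold changes for three placeholder conditions):

```r
set.seed(1)
cond <- rep(c("A", "B", "C"), each = 150)  # placeholder conditions
fold <- rlnorm(450, meanlog = rep(c(0, 0.5, 1), each = 150))

boxplot(log2(fold) ~ cond, ylab = "log2 fold change", xlab = "")
abline(h = 0, lty = 2)  # log2(1) = 0, i.e. no change
```

The same transformation carries over directly to geom_violin() or geom_density_ridges() if you prefer those.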
17,908 | Can I use optimally scaled variables for a factor analysis to account for rotation? If I can then how? | I don't consider rotation as a way to better understand factors (PCs) based on loadings. Rather, rotation is a way to enforce variables to "mostly load" on one factor, which may have large repercussions on factor determination. However, if you a priori know what the factors are supposed to represent, and then are trying to confirm that the appropriate variables are loading correctly on factors, then whatever rotation schemes work are likely to be appropriate. Otherwise, it sounds like you are performing confirmatory factor analysis.
You never stated what the factor scores are going to be used for(?)
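To see concretely what rotation does to loadings, base R's factanal() lets you compare rotations side by side; mtcars is used here purely as placeholder data, not because it suits a factor model:

```r
# compare unrotated and varimax-rotated loadings on placeholder data
X <- scale(mtcars[, c("mpg", "disp", "hp", "drat", "wt", "qsec")])
f_none    <- factanal(X, factors = 2, rotation = "none")
f_varimax <- factanal(X, factors = 2, rotation = "varimax")

print(f_none$loadings,    cutoff = 0.3)
print(f_varimax$loadings, cutoff = 0.3)  # variables now "mostly load" on one factor
```

rotation = "promax" gives an oblique alternative if the factors are allowed to correlate.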
17,909 | How to treat illogical survey responses | This is a good situation for a sensitivity analysis. Analyze your data in each of three ways --
As they are
After excluding "the illogicals", i.e., people whose percentages don't add up to 100 (or 100 +/- 10)
After adjusting where necessary so that each person's percentages add up to 100
Then compare results, sharing any rationale you can develop as to which results might be more accurate, or more accurate in certain respects.
You can also investigate the range of ways in which the logicals and illogicals differ, if any. Do the illogicals tend to report higher incomes? To show greater support for certain ideas or programs? To skip more questions? To evince more bias in the sense of straightlining or disproportionately choosing middle responses or extreme responses?
With about 400 of these illogicals, you have enough data even to assess the relationship between degree of illogicality and degree of a given type of bias. Something like a dose-response relationship.
What you learn from these investigations might be fed back into your plan for dealing with the illogicals when it comes to the main analyses of interest.
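The three branches of the sensitivity analysis can be set up in a few lines of R; the percentages below are invented for illustration:

```r
# invented data: each row is one respondent's reported income shares
resp <- data.frame(art = c(70, 40, 50), gov = c(60, 30, 50),
                   other = c(0, 30, 0))
totals <- rowSums(resp)

as_is    <- resp                             # 1. as they are
kept     <- resp[abs(totals - 100) <= 10, ]  # 2. excluding the illogicals
adjusted <- resp / totals * 100              # 3. rescaled so each row sums to 100
rowSums(adjusted)
```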
17,910 | How to treat illogical survey responses | As already alluded to here, those answers aren't necessarily illogical. For example, you say
The others don’t: for example, there are who answers that 70% of their income derived by his/her artistic activities and 60% by income government, and so on.
That makes perfect sense if 30% of income is derived from artistic activities done for the government. Then we really have three groups:
Artistic activity, unrelated to government: 40%.
Government, unrelated to artistic activity: 30%.
Artistic activity subsidized by government: 30%.
Those numbers add up to 100%.
Consider the following questions:
Are food stamps government or activities not related to the arts?
Is Social Security government or a pension?
Is a government pension (former government employee) a private pension or government?
Is a government grant to paint a painting artistic activity or government?
Is a job teaching arts at a local school government or artistic activity or activities not related to the arts?
If retired from a job in the arts (teaching or commercial, e.g. drawing gift cards), is the pension private or artistic activity? Or in the case of teaching, possibly government?
You seem to want these categories to be mutually exclusive. However, I don't think everyone would interpret them that way. You may have clear thoughts on how those activities should be categorized, but it's not clear that your respondents had the same divisions in mind when they answered. At minimum, if you want the numbers to add to 100%, you should tell people that.
Personally, I think that the best approach to this kind of a problem is to do a kind of focus group. In a traditional survey, you may not be able to validate answers. So call or visit people who would be targeted by the survey and start a conversation. Then when they give answers that you don't understand, ask them why. And beyond that, ask them how you should have asked the question so that you'd get the kind of results that you want. This works more like a focus group in that it is interactive.
Once you've done that, then you can get a better idea of how to handle the responses that don't fit into your format. For example, you might take the 30% extra and subtract half from each. Then you'd have 55% artistic activity and 45% government. Or you might recategorize as 40% private artistic activity, 30% government-sponsored artistic activity, and 30% other (in this case, government support unrelated to artistic activity, e.g. food stamps or rent support). Or throw out the survey and redo it, because people are not understanding your categories properly. Part of this depends on what you understood the categories to mean as well as how they interpreted it.
Too late now, but for future surveys, consider doing a regular focus group before the survey. Then you can test your questions in the group environment and improve them. You may even find that you get additional questions from the group. If it's too difficult to do this in person, consider doing it online. Or do a test survey (of a smaller number of people, validating responses with follow-up questions) by phone personally before doing the real survey. Any of these can help you make your questions clearer.
For example, perhaps your real categories should have been private income from artistic activity; government-sponsored artistic activity; private pension from a former job; other income unrelated to artistic activity. Or something different. Part of the problem is that I can't tell what you want, which makes me think your respondents couldn't either. If there are three different interpretations, it's almost like you are mixing responses from three different surveys.
The others don’t: for example, there are who answers that 70% of their income derived by his/her artisti | How to treat illogical survey responses
As already alluded to here, those answers aren't necessarily illogical. For example, you say
The others don’t: for example, there are who answers that 70% of their income derived by his/her artistic activities and 60% by income government, and so on.
That makes perfect sense if 30% of income is derived from artistic activities done for the government. Then we really have three groups:
Artistic activity, unrelated to government: 40%.
Government, unrelated to artistic activity: 30%.
Artistic activity subsidized by government: 30%.
Those numbers add up to 100%.
Consider the following questions:
Are food stamps government or activities not related to the arts?
Is Social Security government or a pension?
Is a government pension (former government employee) a private pension or government?
Is a government grant to paint a painting artistic activity or government?
Is a job teaching arts at a local school government or artistic activity or activities not related to the arts?
If retired from a job in the arts (teaching or commercial, e.g. drawing gift cards), is the pension private or artistic activity? Or in the case of teaching, possibly government?
You seem to want these categories to be mutually exclusive. However, I don't think everyone would interpret them that way. You may have clear thoughts on how those activities should be categorized, but it's not clear that your respondents had the same divisions in mind when they answered. At minimum, if you want the numbers to add to 100%, you should tell people that.
Personally, I think that the best approach to this kind of a problem is to do a kind of focus group. In a traditional survey, you may not be able to validate answers. So call or visit people who would be targeted by the survey and start a conversation. Then when they give answers that you don't understand, ask them why. And beyond that, ask them how you should have asked the question so that you'd get the kind of results that you want. This works more like a focus group in that it is interactive.
Once you've done that, then you can get a better idea of how to handle the responses that don't fit into your format. For example, you might take the 30% extra and subtract half from each. Then you'd have 55% artistic activity and 45% government. Or you might recategorize as 40% private artistic activity, 30% government-sponsored artistic activity, and 30% other (in this case, government support unrelated to artistic activity, e.g. food stamps or rent support). Or throw out the survey and redo it, because people are not understanding your categories properly. Part of this depends on what you understood the categories to mean as well as how they interpreted it.
Too late now, but for future surveys, consider doing a regular focus group before the survey. Then you can test your questions in the group environment and improve them. You may even find that you get additional questions from the group. If it's too difficult to do this in person, consider doing it online. Or do a test survey (of a smaller number of people, validating responses with follow-up questions) by phone personally before doing the real survey. Any of these can help you make your questions clearer.
For example, perhaps your real categories should have been private income from artistic activity; government-sponsored artistic activity; private pension from a former job; other income unrelated to artistic activity. Or something different. Part of the problem is that I can't tell what you want, which makes me think your respondents couldn't either. If there are three different interpretations, it's almost like you are mixing responses from three different surveys. | How to treat illogical survey responses
As already alluded to here, those answers aren't necessarily illogical. For example, you say
The others don’t: for example, there are who answers that 70% of their income derived by his/her artisti |
17,911 | How to treat illogical survey responses | I cannot give you an answer for the general case of illogical responses. But for this specific type of question - been there, done that. Not only in a survey, but also in semistructured interviews, where I had a chance to observe how people come up with this kind of answer. Based on this, as well as some general experience in observing and analyzing cognitive processes, I would suggest: normalize your data back to a sum of 100%. The reason is that people seem to first go to the most salient category - in your case, that would be the largest income - give a gut-feeling estimate for it in percent, then start thinking of the next smaller categories and base their estimate relative to the anchor of the first category, plus that of further already mentioned categories.
For example, a train of thought may go like: "My first source of income is certainly more than half. It makes what, 60%? No, that's too low, let's say 65%. The second is about a third of that, so that would be a bit more than 20%, uh, difficult to calculate it in my head, let's round up to 25%. The third also feels like a third of the first, but it is actually always a bit more than the second, so it should be 30%. Or even 35? No, let's go with 30. Oh, and I forgot that I have a fourth source, that only happens once a year, that should be really small compared to the others, so 5 or 10%? Probably 5 is closer, it isn't really that much". And so you end up with an answer of 65 + 25 + 30 + 5 = 125%.
Because people tend to be more aware of the relative size of the income parts to each other than of each part to the total, I would say that normalizing them is in order here, if you want to run some kind of numeric analysis on the income. I would only work with the actual reported numbers if the difference between people's beliefs and statements about their income and objective reality is an important topic for your work, for example if you are a psychologist studying cognitive biases, or if you are more interested in the self-perception of artists than in their economic circumstances.
Sadly, I don't have a good literature source to prove that it really works as I described it, it is just my personal empirical observation. But I don't think that reviewers will get caught up on this kind of decision, since, as the other answers said, there is no single "right" way to treat it. If anything, they will dismiss your whole data from this question as invalid due to a flawed querying technique. The best you can do is to preemptively acknowledge it and come up with arguments why your work is nevertheless useful and why the conclusions you are drawing are still good despite this specific source of inaccuracy in the data.
17,912 | How to treat illogical survey responses | If social science has taught me anything, it's that if you give people a chance to give logically inconsistent responses, they will. So rest assured that there's nothing unusual about your subjects. This is something to keep in mind for designing future surveys. For the time being, it may be best to leave the responses as-is and just keep in mind in your analyses that the responses won't actually add up to 100%, as one would think. Rather than true proportions, you have noisy signals of how much income each subject gets from each category, so analyze them that way.
17,913 | How to treat illogical survey responses | artistic activity, government support, private pension, activities not related with arts
Just at a glance, it seems that "artistic activity" and "activity not related with arts" should add up to 100%.
Of course "activity not related with the arts" is not the same as "not activity related with the arts," since there can be income associated with no activity at all. But that's hair splitting that most artists won't notice.
If you assume that categories 1 and 4 should add up to 100%, and reinterpret those respondents accordingly, you may find that most of them have included categories 2 and 3 in with category 4.
However, all of this is data manipulation that is not ideal. If you want accurate statistical answers, you must collect data that is accurate. People might lie in response to your survey, and that's hard to guard against, but if people who are honestly trying to answer your questions can be confused about what is meant, your survey needs to be rewritten.
Next time proofread the survey for understandability, as well as ambiguity, before you send it out.
17,914 | How to treat illogical survey responses | You have given four categories for income - what about income that is in none of them? For example, dividend income from holding stocks. It's not income from any form of activity, yet it's not government support or a pension either. I would suggest that in the absence of other information you should regard the responses as correct and attribute the missing money to sources that the respondents did not consider to be covered by the categories.
17,915 | How to treat illogical survey responses | Actually that is quite simple (and not even as illogical as you might think)! I assume that it is really the proportions of the different categories of income you are after by asking for percentages. So you can simply renormalize to 100%. In your example: if somebody says "70% of my income is from artistic activities and 60% is from government support", this person (who has probably never had any training in working with percentages) is actually saying: the relative sizes or proportions of my income from artistic activities and government support are about 70 to 60, or 7 to 6 (probably not realizing that percentages are supposed to add up to 100). You can convert these statements about proportions to statements about percentages by simply renormalizing them, as follows: 70 / 130 * 100 ≈ 54% artistic income, and 60 / 130 * 100 ≈ 46% government support income.
(What I do here is actually take 130% as a "new" 100% and calculate the proportions.)
PS. This works for all cases where the sum of the stated percentages does not equal 100.
Hope this helps!
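The renormalization arithmetic above is simple to automate. A minimal Python sketch (the function name and the example dictionary keys are illustrative, not part of the original answer):

```python
def renormalize(stated):
    """Rescale stated percentages so that they sum to 100."""
    total = sum(stated.values())
    if total == 0:
        raise ValueError("stated percentages sum to zero; nothing to rescale")
    return {source: share / total * 100 for source, share in stated.items()}

# The example from the answer: 70% artistic + 60% government = 130% stated.
print(renormalize({"artistic": 70, "government": 60}))  # artistic ≈ 53.8, government ≈ 46.2
```

As the PS notes, the same rescaling works whether the stated total is above or below 100.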
17,916 | How to treat illogical survey responses | The answers already given provide some insight into the obvious survey methodological flaws, so I will not dwell on that here. Instead, I'll provide a few practical options for how to treat this data given that it's already collected, despite the flawed question. There are a few ways to handle this. You could consider marking responses that did not meet your definition of a "valid response" by treating the entire question as missing and then following any number of practices for handling item-nonresponse such as those discussed here.
You might also consider scaling each response so that the percentages add to 100. Assuming each response is recorded as a percentage, this can be done by re-coding each original response $y_{{old}_{ij}}$ $(j=1,2,3,4)$ of the 4 sub-components to your question (i.e. artistic activity, government support, private pension, activities not related with arts) into a new response $y_{{new}_{ij}}$ as follows:
\begin{eqnarray*}
y_{new_{ij}} & = & \begin{cases}
0 & ,\,\text{for}\,\sum_{j=1}^{4}y_{old_{ij}}=0\\
\frac{y_{old_{ij}}}{\sum_{j=1}^{4}y_{old_{ij}}}\times100\% & ,\,\text{for}\,\sum_{j=1}^{4}y_{old_{ij}}>0\\
\text{Missing} & ,\,\text{otherwise}
\end{cases}
\end{eqnarray*}
So for example, say you had a respondent $i$ who answered as follows:
A. (j=1) Artistic activity: 10%
B. (j=2) Government support: 0%
C. (j=3) Private pension: 30%
D. (j=4) Activities not related with arts: 40%
Then you'd recode as follows:
\begin{eqnarray*}
y_{new_{i1}} & = & \frac{10}{10+0+30+40}\times100\%=\frac{10}{80}\times100\%=12.5\%\\
y_{new_{i2}} & = & \frac{0}{10+0+30+40}\times100\%=\frac{0}{80}\times100\%=0.0\%\\
y_{new_{i3}} & = & \frac{30}{10+0+30+40}\times100\%=\frac{30}{80}\times100\%=37.5\%\\
y_{new_{i4}} & = & \frac{40}{10+0+30+40}\times100\%=\frac{40}{80}\times100\%=50.0\%
\end{eqnarray*}
Note that all the new percentages now add to 100%. Whatever you do, please be sure you make any transformations very clear when reporting your results and I think @rolando2 provided some excellent advice on how to perform some sensitivity analyses to see how transformations like these might affect your conclusions.
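A small Python sketch of this recoding rule (the function name is illustrative; a `None` or negative entry stands in for the "Missing" case of the formula):

```python
def recode(responses):
    """Recode four stated percentages: an all-zero response stays zero,
    a positive total is rescaled to sum to 100, and anything else
    (e.g. a missing sub-item) is treated as missing."""
    if any(r is None or r < 0 for r in responses):
        return None  # Missing
    total = sum(responses)
    if total == 0:
        return [0.0] * len(responses)
    return [r / total * 100 for r in responses]

print(recode([10, 0, 30, 40]))  # the worked example: [12.5, 0.0, 37.5, 50.0]
```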
17,917 | Common words that have particular statistical meanings | I found a refereed paper from 2010 that looks at this question.
Anderson-Cook CM. Hidden jargon: Everyday words with meanings specific to statistics. ICOTS8, International Conference on Teaching Statistics, Ljubljana, Slovenia, 11-17 July 2010.
The paper is available for free online, so I am only providing a partial list of the terms that the author discusses:
confounding, control, factor, independent, random, uniform
17,918 | Common words that have particular statistical meanings | "significant" -- here the common language use of the word is to mean something like 'important' or 'meaningful'. The statistical meaning is informally nearer to "can be discerned from random variation about the null"; it doesn't signify that the difference is large enough to matter.
Here are some examples where this distinction might have been the cause of some confusion: 1 2
"parameter" -- it often seems to happen - particularly in scientific experiments - that the word 'parameter' is used in the way a statistician would use the word 'variable'. Wikipedia puts it thus:
A statistical parameter is a parameter that indexes a family of probability distributions. It can be regarded as a numerical characteristic of a population or a model
Example where this one may be an issue: 1 - presumably the post that led to this question. (I saw another recently but I can't locate it right now)
17,919 | Common words that have particular statistical meanings | "Error" - In statistics it often means any deviation between an observed and predicted value. In real life it means a mistake.
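A tiny Python illustration of this sense of the word (the numbers are made up):

```python
# "Error" as deviation between observed and predicted values, not as a mistake.
observed = [3.1, 4.0, 5.2]
predicted = [3.0, 4.2, 5.0]
errors = [round(o - p, 6) for o, p in zip(observed, predicted)]
print(errors)  # [0.1, -0.2, 0.2]
```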
17,920 | Common words that have particular statistical meanings | I've come across the problem of using "falsification" as in "falsify a hypothesis", while others thought I was referring to "making up data". Also "biased" is nearly impossible to mention without causing confusion.
17,921 | Common words that have particular statistical meanings | "normal" - In common speech, normal means as expected, not out of the ordinary. In statistics, if a variable is normally distributed, it's referring to the Gaussian distribution. I don't believe it's standard to capitalize the word "normal" to distinguish it from the common speech meaning.
"normalization / standardization" - In statistics, to normalize a variable means to subtract the mean and divide by the standard deviation.
"standard deviation versus standard error" - Standard deviation usually is calculated using the entire population whereas standard error is calculated using the sample.
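To make the statistical sense of "normalize" concrete (subtract the mean, divide by the standard deviation), a minimal Python sketch with made-up numbers:

```python
def normalize(values):
    """Z-score a sample: subtract the mean, divide by the (population) standard deviation."""
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    return [(v - mean) / sd for v in values]

# Made-up data with mean 5 and standard deviation 2.
print(normalize([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]))
# [-1.5, -0.5, -0.5, -0.5, 0.0, 0.0, 1.0, 2.0]
```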
17,922 | Common words that have particular statistical meanings | Estimate -- In statistics it is the result of a calculation. For example, the sample mean is an estimate of the population mean, and the confidence interval of a mean is an interval estimate of the population mean. These are both results of exact calculations. The "estimation" is a precise generalization of trying to make an inference about a population from data in a sample.
In ordinary use, the word estimate means an informed guess or hunch, or the result of an approximate calculation.
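For instance, both the point estimate (the sample mean) and a normal-approximation 95% interval estimate are results of exact calculations; a Python sketch with made-up data:

```python
import math

data = [4.1, 5.2, 6.3, 5.8, 4.9, 5.5]  # made-up sample
n = len(data)
mean = sum(data) / n  # point estimate of the population mean
sd = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1))  # sample standard deviation
half_width = 1.96 * sd / math.sqrt(n)  # normal-approximation 95% half-width
print(mean, (mean - half_width, mean + half_width))  # point and interval estimates
```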
17,923 | Common words that have particular statistical meanings | Likelihood - in ordinary parlance a synonym of probability, but in statistics having a particular inverse relation to probability, in that, for any parameter set $\theta$ and data set $X$, $\mathcal{L}(\theta|X)=\Pr(X|\theta)$.
Representative - has a number of sometimes conflicting meanings in both everyday and scientific parlance. Refer to Kruskal & Mosteller 1979a, 1979b, 1979c and 1980. Most statisticians I know would consider a sample representative if it was sampled with known probability; most laypeople I know would consider it representative if the marginal distributions were akin to the population.
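The inverse relation under "Likelihood" can be made concrete with a coin-flip sketch: the same binomial formula is a probability when read as a function of the data, and a likelihood when read as a function of $\theta$ with the data held fixed:

```python
from math import comb

def binomial_pmf(k, n, theta):
    """Pr(X = k | theta) for n independent flips with success probability theta."""
    return comb(n, k) * theta**k * (1 - theta) ** (n - k)

# Hold the data fixed (7 heads in 10 flips) and vary theta: that is the likelihood.
likelihood = lambda theta: binomial_pmf(7, 10, theta)
print(likelihood(0.5), likelihood(0.7))  # theta = 0.7 has the higher likelihood here
```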
17,924 | Common words that have particular statistical meanings | Sample: while in statistics this refers to a set of cases, in many other disciplines a sample is one physical specimen.
Of course, sample size is also ambiguous, referring either to the number of cases in the statistical sample or the physical size (mass, volume, ...) of the specimen.
Sensitivity: for medical diagnostics the fraction of diseased cases that is recognized by the test. In analytical chemistry: the slope of the calibration curve (see below).
Specificity: in medical diagnostics the fraction of non-diseased cases that is correctly recognized by the test. In analytical chemistry, a method is specific if there are no cross-sensitivities.
Calibration: actually, two meanings are listed already for statistics in the Wiki article. In chemistry and physics, the reverse regression meaning is the usual one. Confusion arises, though:
In chemometrics, (forward) calibration models the measured signal $I$ dependent on the concentration $c$: $I = f (c)$. Prediction then solves for concentration $c$: $c = f^{-1} (I)$. Inverse calibration models $c = f (I)$. Thus, the forward model agrees with the causality (concentration of analyte causes signal, not the other way round), but the inverse model follows the direction that is used for the predictions.
(In practice, it is often possible to say that the error on $c$ or the error on $I$ is much larger than the other, and the appropriate modeling direction is/should be chosen from that)
I've seen plots of predicted probability over true probability called "calibration plots" (stats people). In analytical chemistry, the corresponding calibration plot would be predicted probability over measured signal (usually some other unit). The plot of predicted over true dependent variable would usually be called recovery curve.
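The two modeling directions might be sketched with least-squares straight lines (the concentration/signal numbers are made up; as noted above, in practice the direction would be chosen from the error structure):

```python
# Hypothetical calibration data: concentrations c and measured signals I.
c = [1.0, 2.0, 3.0, 4.0, 5.0]
I = [2.1, 3.9, 6.2, 7.8, 10.1]

def ols(x, y):
    """Slope and intercept of the least-squares line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    a = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / sum((xi - mx) ** 2 for xi in x)
    return a, my - a * mx

# Forward calibration: model I = f(c), then solve f^{-1} to predict c from a new signal.
a, b = ols(c, I)
c_from_forward = (6.0 - b) / a

# Inverse calibration: model c = f(I) directly and plug the new signal in.
a_inv, b_inv = ols(I, c)
c_from_inverse = a_inv * 6.0 + b_inv
print(c_from_forward, c_from_inverse)  # close, but not identical in general
```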
Validation set: here I'd like to draw attention to a potentially confusing use of terms which I think already arises within the different statistics-related fields, even though I will again contrast it with analytical chemistry.
In the context of nested/double validation or optimization vs. validation/testing, one line of terminology splits training - validation - test and uses the "validation" set for optimization of hyperparameters.
E.g. in the Elements of Statistical Learning, p. 222 in the 2nd ed.:
... divide the dataset into three parts: a training set, a validation
set, and a test set. The training set is used to fit the models; the validation
set is used to estimate prediction error for model selection; the test set is
used for assessment of the generalization error of the final chosen model.
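The three-way division described in the quotation can be sketched as follows (a toy example; the 50/25/25 proportions are arbitrary):

```python
import random

def three_way_split(cases, seed=0):
    """Shuffle and split cases into training, validation, and test sets."""
    cases = cases[:]  # don't mutate the caller's list
    random.Random(seed).shuffle(cases)
    n = len(cases)
    i, j = n // 2, (3 * n) // 4  # 50% train, 25% validation, 25% test
    return cases[:i], cases[i:j], cases[j:]

train, validation, test = three_way_split(list(range(20)))
print(len(train), len(validation), len(test))  # 10 5 5
```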
In contrast, e.g. in analytical chemistry validation is the procedure that demonstrates that the model (actually, the assessment of the final model is only part of the validation of an analytical method) works well for the application, and measures its performance, see e.g. John K. Taylor: Validation of analytical methods, Analytical Chemistry 1983 55 (6), 600A-608A
or guidelines by institutions like the FDA. This would be "testing" in the other line of terminology, where the "validation" is actually used for optimization.
The crucial difference is that the "optimization-validation" results are to be used to change (select) the model, whereas changes in a validated analytical method (including the data analytic model) mean that you have to revalidate (i.e. prove that the method still works as it is supposed to work).
If you happen to have to talk to chemists, a good reference of the analytical chemistry terminology is Danzer: Analytical Chemistry - Theoretical and Metrological Fundamentals, DOI 10.1007/b103950
17,925 | Common words that have particular statistical meanings | Skewed in statistics implies asymmetric in distribution.
In ordinary language, and even within science, skewed is often used (and increasingly?) to mean what statistical people would usually call biased, as in "Results for mean height are skewed by including so many basketball players".
17,926 | Common words that have particular statistical meanings | "Parametric" versus "Non-Parametric": categories of tests that either assume the data follow a particular distribution (typically Normal) or make no such distributional assumption. Parametric tests are generally preferred when their assumptions hold.
Common tests: T-test (paired), Mann-Whitney U, ANOVA, Anderson-Darling, etc.
Other terms include "significant". This is a measure of whether the data support your hypothesis or not. When you test your hypothesis at a given confidence level (commonly 95%), a "p-value" of less than 0.05 indicates that you would reject your "null hypothesis" (i.e. the data sets are not different) in favour of your "alternative hypothesis" (i.e. the data sets are different).
17,927 | Are there parameters where a biased estimator is considered "better" than the unbiased estimator? [duplicate] | One example is estimates from ordinary least squares regression when there is collinearity. They are unbiased but have huge variance. Ridge regression on the same problem yields estimates that are biased but have much lower variance. E.g.
install.packages("ridge")  # provides linearRidge() and the GenCont example data
library(ridge)
set.seed(831)
data(GenCont)
# Ridge regression: coefficients are biased but have much lower variance
ridgemod <- linearRidge(Phenotypes ~ ., data = as.data.frame(GenCont))
summary(ridgemod)
# Ordinary least squares on the same collinear data: unbiased, huge variance
linmod <- lm(Phenotypes ~ ., data = as.data.frame(GenCont))
summary(linmod)
The t values are much larger for ridge regression than linear regression. The bias is fairly small.
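The mechanism can be sketched without the ridge package. Below is an illustrative pure-Python simulation (my own construction, not the GenCont data): with two nearly collinear predictors, the OLS coefficient varies wildly across repeated samples, while adding a ridge penalty to the diagonal of the normal equations shrinks (biases) the estimate but drastically reduces its sampling spread.

```python
import random

def solve_2x2(m, t):
    # Solve the symmetric 2x2 system m %*% beta = t by Cramer's rule.
    (a, b), (_, c) = m
    det = a * c - b * b
    return ((c * t[0] - b * t[1]) / det, (a * t[1] - b * t[0]) / det)

def coef_spread(lam, reps=500, n=50, seed=1):
    # Sampling standard deviation of the first coefficient across many
    # simulated datasets with nearly collinear predictors.
    # lam = 0 gives OLS; lam > 0 gives ridge (penalty on the diagonal).
    rng = random.Random(seed)
    betas = []
    for _ in range(reps):
        x1 = [rng.gauss(0, 1) for _ in range(n)]
        x2 = [v + rng.gauss(0, 0.01) for v in x1]  # x2 almost equals x1
        y = [a + b + rng.gauss(0, 1) for a, b in zip(x1, x2)]
        s11 = sum(v * v for v in x1) + lam
        s12 = sum(u * v for u, v in zip(x1, x2))
        s22 = sum(v * v for v in x2) + lam
        t = (sum(u * v for u, v in zip(x1, y)),
             sum(u * v for u, v in zip(x2, y)))
        betas.append(solve_2x2([[s11, s12], [s12, s22]], t)[0])
    mean = sum(betas) / reps
    return (sum((b - mean) ** 2 for b in betas) / reps) ** 0.5

print("OLS spread:  ", coef_spread(lam=0.0))
print("ridge spread:", coef_spread(lam=5.0))
```

The OLS spread comes out orders of magnitude larger than the ridge spread, which is exactly the variance reduction the t values in the summary output reflect.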
17,928 | Are there parameters where a biased estimator is considered "better" than the unbiased estimator? [duplicate] | Yes there are plenty of cases; you're beating around the bush that is the topic of Bias-Variance tradeoff (in particular, the graphic to the right is a good visualization).
As for a mathematical example, I am pulling the following example from the excellent Statistical Inference by Casella and Berger to show that a biased estimator has lower Mean Squared Error and thus is considered better.
Let $X_1, ..., X_n$ be i.i.d. n$(\mu, \sigma^2)$ (i.e. Gaussian with mean $\mu$ and variance $\sigma^2$ in their notation). We will compare two estimators of $\sigma^2$: the first, unbiased, estimator is
$$\hat{\sigma}_{unbiased}^2 := \frac{1}{n-1}\sum_{i=1}^{n} (X_i - \bar{X})^2$$ usually called $S^2$, the canonical sample variance, and the second is $$\hat{\sigma}_{biased}^2 := \frac{1}{n}\sum_{i=1}^{n} (X_i - \bar{X})^2 = \frac{n-1}{n}\hat{\sigma}_{unbiased}^2$$ which is the Maximum Likelihood estimate of $\sigma^2$. First, the MSE of the unbiased estimator:
$$\begin{align} \text{MSE}(\hat{\sigma}^2_{unbiased}) &= \text{Var}\ \hat{\sigma}^2_{unbiased} + \text{Bias}(\hat{\sigma}^2_{unbiased})^2 \\ &= \frac{2\sigma^4}{n-1}\end{align}$$
The MSE of the biased, maximum likelihood estimate of $\sigma^2$ is:
$$\begin{align}\text{MSE}(\hat{\sigma}_{biased}^2) &= \text{Var}\ \hat{\sigma}_{biased}^2 + \text{Bias}(\hat{\sigma}_{biased}^2)^2\\ &=\text{Var}\left(\frac{n-1}{n}\hat{\sigma}^2_{unbiased}\right) + \left(\text{E}\hat{\sigma}_{biased}^2 - \sigma^2\right)^2 \\ &=\left(\frac{n-1}{n}\right)^2\text{Var}\ \hat{\sigma}^2_{unbiased} \, + \left(\frac{n-1}{n}\sigma^2 - \sigma^2\right)^2\\ &= \frac{2(n-1)\sigma^4}{n^2} + \frac{\sigma^4}{n^2}\\ &= \left(\frac{2n-1}{n^2}\right)\sigma^4\end{align}$$
Hence,
$$\text{MSE}(\hat{\sigma}_{biased}^2) = \frac{2n-1}{n^2}\sigma^4 < \frac{2}{n-1}\sigma^4 = \text{MSE}(\hat{\sigma}_{unbiased}^2)$$
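The inequality is easy to check by Monte Carlo. A small illustrative Python sketch (my own, not from Casella and Berger) estimating both MSEs for n = 10, σ = 1, where the theory gives 2/(n−1) ≈ 0.222 and (2n−1)/n² = 0.19:

```python
import random

def variance_mses(n=10, sigma=1.0, reps=100_000, seed=0):
    # Monte Carlo MSE of the unbiased (divide by n-1) and biased/MLE
    # (divide by n) variance estimators for i.i.d. N(0, sigma^2) samples.
    rng = random.Random(seed)
    se_unbiased = se_biased = 0.0
    true_var = sigma ** 2
    for _ in range(reps):
        xs = [rng.gauss(0.0, sigma) for _ in range(n)]
        xbar = sum(xs) / n
        ss = sum((x - xbar) ** 2 for x in xs)
        se_unbiased += (ss / (n - 1) - true_var) ** 2
        se_biased += (ss / n - true_var) ** 2
    return se_unbiased / reps, se_biased / reps

mse_u, mse_b = variance_mses()
print(mse_u, mse_b)  # theory: 2/9 ~ 0.222 and 19/100 = 0.19
```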
17,929 | Are there parameters where a biased estimator is considered "better" than the unbiased estimator? [duplicate] | There are numerous examples where the MLE has smaller mean square error (MSE) than the best available unbiased estimator (though often there are estimators with smaller MSE still). The "standard" example in normal sampling is estimating the variance.
There are many cases where no unbiased estimator exists, such as the inverse of the rate parameter in a Poisson.
It's also possible to find situations where an unbiased estimator can sometimes give "impossible" estimates (give estimated values that the parameter simply cannot take, such as sometimes giving negative estimates for necessarily positive quantities for example).
One widely known example (see for example [1], but it often appears in student exercises) is for $X\sim \text{Pois}(\lambda)$ where the only unbiased estimator of $e^{-2\lambda}$ is $(-1)^X$. On observing an odd value for $X$, the estimate is negative.
[1] Romano, J. P. and Siegel, A. F. (1986),
Counterexamples in Probability and Statistics.
Boca Raton: Chapman and Hall/CRC.
17,930 | Are effect sizes really superior to p-values? | The advice to provide effect sizes rather than P-values is based on a false dichotomy and is silly. Why not present both?
Scientific conclusions should be based on a rational assessment of available evidence and theory. P-values and observed effect sizes alone or together are not enough.
Neither of the quoted passages that you supply is helpful. Of course P-values vary from experiment to experiment, the strength of evidence in the data varies from experiment to experiment. The P-value is just a numerical extraction of that evidence by way of the statistical model. Given the nature of the P-value, it is very rarely relevant to analytical purposes to compare one P-value with another, so perhaps that is what the quote author is trying to convey.
If you find yourself wanting to compare P-values then you probably should have performed a significance test on a different arrangement of the data in order to sensibly answer the question of interest. See these questions:
p-values for p-values? and
If one group's mean differs from zero but the other does not, can we conclude that the groups are different?
So, the answer to your question is complex. I do not find dichotomous responses to data based on either P-values or effect sizes to be useful, so are effect sizes superior to P-values? Yes, no, sometimes, maybe, and it depends on your purpose.
17,931 | Are effect sizes really superior to p-values? | In the context of applied research, effect sizes are necessary for readers to interpret the practical significance (as opposed to statistical significance) of the findings. In general, p-values are far more sensitive to sample size than effect sizes are. If an experiment measures an effect size accurately (i.e. it is sufficiently close to the population parameter it is estimating) but yields a non-significant p-value then, all things being equal, increasing the sample size will result in the same effect size but a lower p-value. This can be demonstrated with power analyses or simulations.
In light of this, it is possible to achieve highly significant p-values for effect sizes that have no practical significance. In contrast, study designs with low power can produce non-significant p-values for effect sizes of great practical importance.
It is difficult to discuss the concepts of statistical significance vis-a-vis effect size without a specific real-world application. As an example, consider an experiment that evaluates the effect of a new studying method on students' grade point average (GPA). I would argue that an effect size of 0.01 grade points has little practical significance (i.e. 2.50 compared to 2.51). Assuming a sample size of 2,000 students in both treatment and control groups, and a population standard deviation of 0.5 grade points:
set.seed(12345)
control.data <- rnorm(n = 2000, mean = 2.5, sd = 0.5)
set.seed(12345)  # same seed: both groups share identical noise, so the
                 # observed difference in means is exactly 0.01
treatment.data <- rnorm(n = 2000, mean = 2.51, sd = 0.5)
t.test(x = control.data, y = treatment.data, alternative = "two.sided", var.equal = TRUE)
treatment sample mean = 2.51
control sample mean = 2.50
effect size = 2.51 - 2.50 = 0.01
p = 0.53
Increasing the sample size to 20,000 students and holding everything else constant yields a significant p-value:
set.seed(12345)
control.data <- rnorm(n = 20000, mean = 2.5, sd = 0.5)
set.seed(12345)  # same construction as before, ten times the sample size
treatment.data <- rnorm(n = 20000, mean = 2.51, sd = 0.5)
t.test(x = control.data, y = treatment.data, alternative = "two.sided", var.equal = TRUE)
treatment sample mean = 2.51
control sample mean = 2.50
effect size = 2.51 - 2.50 = 0.01
p = 0.044
Obviously it's no trivial thing to increase the sample size by an order of magnitude! However, I think we can all agree that the practical improvement offered by this study method is negligible. If we relied solely on the p-value then we might believe otherwise in the n=20,000 case.
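The effect of sample size on the p-value can also be seen analytically, without simulation. The sketch below (my own, treating the problem as a known-sd two-sample z-test, which closely approximates the t-test at these sample sizes) reproduces the two results above:

```python
import math

def two_sample_z_pvalue(effect, sd, n):
    # Two-sided p-value for a difference in means when the population sd
    # is known and both groups have n observations (a z-test).
    se = sd * math.sqrt(2.0 / n)                      # standard error of the difference
    z = effect / se
    cdf = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF
    return 2.0 * (1.0 - cdf)

print(two_sample_z_pvalue(0.01, 0.5, 2000))   # ~0.53: not significant
print(two_sample_z_pvalue(0.01, 0.5, 20000))  # ~0.045: "significant"
```

The effect size (0.01 grade points) never changes; only the standard error shrinks as n grows, dragging the p-value below 0.05.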
Personally I advocate for reporting both p-values and effect sizes. And bonus points for t- or F-statistics, degrees of freedom and model diagnostics!
17,932 | Are effect sizes really superior to p-values? | I currently work in the data science field, and before then I worked in education research. In each "career" I've collaborated with people who did not come from a formal background in statistics and who placed heavy emphasis on the p-value for both statistical and practical significance. I've learned to include and emphasize effect sizes in my analyses because there is a difference between statistical significance and practical significance.
Generally, the people I worked with cared about one thing: "does our program/feature make an impact, yes or no?" To a question like this, you can do something as simple as a t-test and report to them "yes, your program/feature makes a difference". But how large or small is this "difference"?
First, before I begin delving into this topic, I'd like to summarize what we refer to when speaking of effect sizes
Effect size is simply a way of quantifying the size of the difference between two groups. [...] It is particularly valuable for quantifying the effectiveness of a particular intervention, relative to some comparison. It allows us to move beyond the simplistic, 'Does it work or not?' to the far more sophisticated, 'How well does it work in a range of contexts?' Moreover, by placing the emphasis on the most important aspect of an intervention - the size of the effect - rather than its statistical significance (which conflates effect size and sample size), it promotes a more scientific approach to the accumulation of knowledge. For these reasons, effect size is an important tool in reporting and interpreting effectiveness.
It's the Effect Size, Stupid: What effect size is and why it is important
Next, what is a p-value, and what information does it provide us? Well, a p-value, in as few words as possible, is the probability of observing a difference at least as extreme as the one seen, assuming the null hypothesis is true. We therefore reject the null hypothesis when this p-value is smaller than a threshold ($\alpha$).
Why Isn't the P Value Enough?
Statistical significance is the probability that the observed difference between two groups is due to chance. If the P value is larger than the alpha level chosen (eg, .05), any observed difference is assumed to be explained by sampling variability. With a sufficiently large sample, a statistical test will almost always demonstrate a significant difference, unless there is no effect whatsoever, that is, when the effect size is exactly zero; yet very small differences, even if significant, are often meaningless. Thus, reporting only the significant P value for an analysis is not adequate for readers to fully understand the results.
And to corroborate @DarrenJames's comments regarding large sample sizes
For example, if a sample size is 10 000, a significant P value is likely to be found even when the difference in outcomes between groups is negligible and may not justify an expensive or time-consuming intervention over another. The level of significance by itself does not predict effect size. Unlike significance tests, effect size is independent of sample size. Statistical significance, on the other hand, depends upon both sample size and effect size. For this reason, P values are considered to be confounded because of their dependence on sample size. Sometimes a statistically significant result means only that a huge sample size was used. [There is a mistaken view that this behaviour represents a bias against the null hypothesis. Why does frequentist hypothesis testing become biased towards rejecting the null hypothesis with sufficiently large samples? ]
Using Effect Size—or Why the P Value Is Not Enough
Report Both P-value and Effect Sizes
Now to answer the question: are effect sizes superior to p-values? I would argue that these each serve as important components of a statistical analysis that cannot be compared in such terms, and they should be reported together. The p-value is a statistic indicating statistical significance (difference from the null distribution), whereas the effect size expresses how large that difference is.
As an example, say your supervisor, Bob, who is not very stats-friendly, is interested in seeing whether there is a significant relationship between wt (weight) and mpg (miles per gallon). You start the analysis with hypotheses
$$
H_0: \beta_{wt} = 0 \text{ vs } H_A: \beta_{wt} \neq 0
$$
being tested at $\alpha = 0.05$
> data("mtcars")
>
> fit = lm(formula = mpg ~ wt, data = mtcars)
>
> summary(fit)
Call:
lm(formula = mpg ~ wt, data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-4.5432 -2.3647 -0.1252 1.4096 6.8727
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 37.2851 1.8776 19.858 < 2e-16 ***
wt -5.3445 0.5591 -9.559 1.29e-10 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.046 on 30 degrees of freedom
Multiple R-squared: 0.7528, Adjusted R-squared: 0.7446
F-statistic: 91.38 on 1 and 30 DF, p-value: 1.294e-10
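The slope that lm reports comes from the closed-form least-squares formulas, slope = Sxy/Sxx and intercept = ȳ − slope·x̄. A minimal Python sketch on hypothetical toy data (not the mtcars fit above):

```python
def ols_simple(x, y):
    # Closed-form simple linear regression.
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    slope = sxy / sxx
    return slope, ybar - slope * xbar

# hypothetical toy data: y = 2x + 1 exactly
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [3.0, 5.0, 7.0, 9.0, 11.0]
print(ols_simple(x, y))  # (2.0, 1.0)
```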
From the summary output we can see that the coefficient on wt has a t-statistic with a very small p-value. We can comfortably reject the null hypothesis and report that $\beta_{wt} \neq 0$. However, your boss asks: well, how different is it? You can tell Bob, "well, it looks like there is a negative linear relationship between mpg and wt. In particular, for every one-unit increase in wt there is a decrease of 5.3445 in mpg".
Thus, you were able to conclude that results were statistically significant, and communicate the significance in practical terms.
I hope this was useful in answering your question. | Are effect sizes really superior to p-values? | I currently work in the data science field, and before then I worked in education research. While at each "career" I've collaborated with people who did not come from a formal background in statistics | Are effect sizes really superior to p-values?
I currently work in the data science field, and before then I worked in education research. While at each "career" I've collaborated with people who did not come from a formal background in statistics, and where emphasis of statistical (and practical) significance is heavily placed on the p-value. I've learned include and emphasize effect sizes in my analyses because there is a difference between statistical significance and practical significance.
Generally, the people I worked with cared about one thing "does our program/feature make and impact, yes or no?". To a question like this, you can do something as simple as a t-test and report to them "yes, your program/feature makes a difference". But how large or small is this "difference"?
First, before I begin delving into this topic, I'd like to summarize what we refer to when speaking of effect sizes
Effect size is simply a way of quantifying the size of the difference between two groups. [...] It is particularly valuable for quantifying the effectiveness of a particular intervention, relative to some comparison. It allows us to move beyond the simplistic, 'Does it work or not?' to the far more sophisticated, 'How well does it work in a range of contexts?' Moreover, by placing the emphasis on the most important aspect of an intervention - the size of the effect - rather than its statistical significance (which conflates effect size and sample size), it promotes a more scientific approach to the accumulation of knowledge. For these reasons, effect size is an important tool in reporting and interpreting effectiveness.
It's the Effect Size, Stupid: What effect size is and why it is important
Next, what is a p-value, and what information does it provide us? Well, a p-value, in as few words as possible, is a probability that the observed difference from the null distribution is by pure chance. We therefore reject (or fail to accept) the null hypothesis when this p-value is smaller than a threshold ($\alpha$).
Why Isn't the P Value Enough?
Statistical significance is the probability that the observed difference between two groups is due to chance. If the P value is larger than the alpha level chosen (eg, .05), any observed difference is assumed to be explained by sampling variability. With a sufficiently large sample, a statistical test will almost always demonstrate a significant difference, unless there is no effect whatsoever, that is, when the effect size is exactly zero; yet very small differences, even if significant, are often meaningless. Thus, reporting only the significant P value for an analysis is not adequate for readers to fully understand the results.
And to corroborate @DarrenJames's comments regarding large sample sizes
For example, if a sample size is 10 000, a significant P value is likely to be found even when the difference in outcomes between groups is negligible and may not justify an expensive or time-consuming intervention over another. The level of significance by itself does not predict effect size. Unlike significance tests, effect size is independent of sample size. Statistical significance, on the other hand, depends upon both sample size and effect size. For this reason, P values are considered to be confounded because of their dependence on sample size. Sometimes a statistically significant result means only that a huge sample size was used. [There is a mistaken view that this behaviour represents a bias against the null hypothesis. Why does frequentist hypothesis testing become biased towards rejecting the null hypothesis with sufficiently large samples? ]
Using Effect Size—or Why the P Value Is Not Enough
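The sample-size dependence described in the quotes above is easy to demonstrate with a simulation. The sketch below (Python, standard library only; the true difference of 0.1 sd and the sample sizes are arbitrary choices for illustration) holds the effect size roughly constant while the p-value collapses as n grows:

```python
import math
import random

def p_and_d(n, mu_diff=0.1, seed=1):
    """Simulate two groups whose true means differ by mu_diff (sd = 1);
    return an approximate two-sided p-value (normal approximation to the
    two-sample test, fine for large n) and Cohen's d."""
    rng = random.Random(seed)
    a = [rng.gauss(0.0, 1.0) for _ in range(n)]
    b = [rng.gauss(mu_diff, 1.0) for _ in range(n)]
    ma, mb = sum(a) / n, sum(b) / n
    va = sum((x - ma) ** 2 for x in a) / (n - 1)
    vb = sum((x - mb) ** 2 for x in b) / (n - 1)
    se = math.sqrt(va / n + vb / n)
    z = (mb - ma) / se
    p = math.erfc(abs(z) / math.sqrt(2))
    d = (mb - ma) / math.sqrt((va + vb) / 2)  # standardized effect size
    return p, d

for n in (50, 500, 50_000):
    p, d = p_and_d(n)
    print(f"n={n:6d}  p={p:.4f}  d={d:.2f}")
```

With n = 50 the tiny effect is invisible to the test; with n = 50,000 the p-value is microscopic even though d is still only about 0.1.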
Report Both P-value and Effect Sizes
Now to answer the question: are effect sizes superior to p-values? I would argue that each serves as an important component of statistical analysis, that they cannot be compared in such terms, and that they should be reported together. The p-value is a statistic that indicates statistical significance (difference from the null distribution), while the effect size quantifies how much of a difference there is.
As an example, say your supervisor, Bob, who is not very stats-friendly, is interested in seeing whether there is a significant relationship between wt (weight) and mpg (miles per gallon). You start the analysis with hypotheses
$$
H_0: \beta_{mpg} = 0 \text{ vs } H_A: \beta_{mpg} \neq 0
$$
being tested at $\alpha = 0.05$
> data("mtcars")
>
> fit = lm(formula = mpg ~ wt, data = mtcars)
>
> summary(fit)
Call:
lm(formula = mpg ~ wt, data = mtcars)
Residuals:
Min 1Q Median 3Q Max
-4.5432 -2.3647 -0.1252 1.4096 6.8727
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 37.2851 1.8776 19.858 < 2e-16 ***
wt -5.3445 0.5591 -9.559 1.29e-10 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 3.046 on 30 degrees of freedom
Multiple R-squared: 0.7528, Adjusted R-squared: 0.7446
F-statistic: 91.38 on 1 and 30 DF, p-value: 1.294e-10
From the summary output we can see that we have a t-statistic with a very small p-value. We can comfortably reject the null hypothesis and report that $\beta_{mpg} \neq 0$. However, your boss asks: well, how different is it? You can tell Bob, "it looks like there is a negative linear relationship between mpg and wt; for every one-unit increase in wt there is a decrease of about 5.34 in mpg".
Thus, you were able to conclude that results were statistically significant, and communicate the significance in practical terms.
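For readers curious what `lm` is doing under the hood for a single predictor, here is a minimal least-squares fit in Python (standard library only). The (wt, mpg) pairs are invented to mimic an mtcars-style slope; they are not the actual mtcars values:

```python
def ols_slope_intercept(x, y):
    """Least-squares fit y ~ a + b*x; returns (intercept, slope)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    return my - b * mx, b

# Hypothetical (weight, mpg) pairs echoing the fitted relationship:
wt  = [1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
mpg = [30.0, 27.5, 25.0, 22.5, 20.0, 17.5]
a, b = ols_slope_intercept(wt, mpg)
print(a, b)  # 37.5 -5.0: each extra unit of wt costs about 5 mpg
```

The slope is exactly the kind of raw effect size you can hand to Bob: it is in the units he cares about.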
I hope this was useful in answering your question.
17,933 | Are effect sizes really superior to p-values? | The utility of effect sizes relative to p-values (as well as other metrics of statistical inference) is routinely debated in my field—psychology—and the debate is currently “hotter” than normal for reasons that are relevant to your question. And though I am sure psychology isn’t necessarily the most statistically sophisticated scientific field, it has readily discussed, studied—and at times, demonstrated—limitations of various approaches to statistical inference, or at least how they are limited by human use. The answers already posted include good insights, but in case you are interested in a more extensive list (and references) of reasons for and against each, see below.
Why are p-values undesirable?
As Darren James notes (and his simulation shows), p-values are largely contingent on the number of observations that you have (see Kirk, 2003)
As Jon notes, p-values represent the conditional probability of observing data as extreme or more extreme given that the null hypothesis is true. As most researchers would rather have probabilities of the research hypothesis, and/or the null-hypothesis, p-values do not speak to probabilities in which researchers are most interested (i.e., of the null or research hypothesis, see Dienes, 2008)
Many who use p-values do not understand what they mean/do not mean (Schmidt & Hunter, 1997). Michael Lew’s reference to Gelman and Stern’s (2006) paper further underscores researcher misunderstandings about what one can (or cannot) interpret from p-values. And as a relatively recent story on FiveThirtyEight demonstrates, this continues to be the case.
p-values are not great at predicting subsequent p-values (Cumming, 2008)
p-values are often misreported (more often inflating significance), and misreporting is linked to an unwillingness to share data (Bakker & Wicherts, 2011; Nuijten et al., 2016; Wicherts et al., 2011)
p-values can be (and historically, have been) actively distorted through analytic flexibility, and are therefore untrustworthy (John et al., 2012; Simmons et al., 2011)
p-values are disproportionately significant, as academic systems appear to reward scientists for statistical significance over scientific accuracy (Fanelli, 2010; Nosek et al., 2012; Rosenthal, 1979)
Why are effect sizes desirable?
Note that I am interpreting your question as referring specifically to standardized effect sizes, as you say they allow researchers to transform their findings “INTO A COMMON metric”.
As Jon and Darren James indicate, effect sizes indicate the magnitude of an effect, independent of the number of observations (American Psychological Association 2010; Cumming, 2014) as opposed to making dichotomous decisions of whether an effect is there or not there.
Effect sizes are valuable because they make meta-analyses possible, and meta-analysis drive cumulative knowledge (Borenstein et al., 2009; Chan & Arvey, 2012)
Effect sizes help to facilitate sample size planning via a priori power analysis, and therefore efficient resource allocation in research (Cohen, 1992)
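As a sketch of how an effect size feeds an a priori power analysis, the snippet below (Python, standard library only) uses the common normal-approximation sample-size formula for a two-sample comparison. This is a simplification of what dedicated power software computes; the exact t-based answer is slightly larger:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Normal-approximation sample size per group for a two-sample test
    to detect standardized effect size d at two-sided alpha."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value
    z_b = NormalDist().inv_cdf(power)          # power quantile
    return math.ceil(2 * ((z_a + z_b) / d) ** 2)

for d in (0.2, 0.5, 0.8):  # Cohen's small / medium / large benchmarks
    print(d, n_per_group(d))
```

The output (393, 63, and 25 per group) lands close to the familiar values in Cohen's (1992) tables, which use the more exact t-based calculation.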
Why are p-values desirable?
Though they are less frequently espoused, p-values have a number of perks. Some are well-known and longstanding, whereas others are relatively new.
P-values provide a convenient and familiar index of the strength of evidence against the statistical model null hypothesis.
When calculated correctly, p-values provide a means of making dichotomous decisions (which are sometimes necessary), and p-values help keep long-run false-positive error rates at an acceptable level (Dienes, 2008; Sakaluk, 2016) [It is not strictly correct to say that P-values are required for dichotomous decisions. They are indeed widely used that way, but Neyman & Pearson used 'critical regions' in the test statistic space for that purpose. See this question and its answers]
p-values can be used to facilitate continuously efficient sample size planning (not just one-time power-analysis) (Lakens, 2014)
p-values can be used to facilitate meta-analysis and evaluate evidential value (Simonsohn et al., 2014a; Simonsohn et al., 2014b). See this blogpost for an accessible discussion of how distributions of p-values can be used in this fashion, as well as this CV post for a related discussion.
p-values can be used forensically to determine whether questionable research practices may have been used, and how replicable results might be (Schimmack, 2014; also see Schönbrodt’s app, 2015)
Why are effect sizes undesirable (or overrated)?
Perhaps the most counter-intuitive position to many; why would reporting standardized effect sizes be undesirable, or at the very least, overrated?
In some cases, standardized effect sizes aren’t all that they are cracked up to be (e.g., Greenland, Schlesselman, & Criqui, 1986). Baguley (2009), in particular, has a nice description of some of the reasons why raw/unstandardized effect sizes may be more desirable.
Despite their utility for a priori power analysis, effect sizes are not actually used reliably to facilitate efficient sample-size planning (Maxwell, 2004)
Even when effect sizes are used in sample size planning, because they are inflated via publication bias (Rosenthal, 1979) published effect sizes are of questionable utility for reliable sample-size planning (Simonsohn, 2013)
Effect size estimates can be—and have been—systematically miscalculated in statistical software (Levine & Hullett, 2002)
Effect sizes are mistakenly extracted (and probably misreported), which undermines the credibility of meta-analyses (Gøtzsche et al., 2007)
Lastly, correcting for publication bias in effect sizes remains ineffective (see Carter et al., 2017), which, if you believe publication bias exists, renders meta-analyses less impactful.
Summary
Echoing the point made by Michael Lew, p-values and effect sizes are but two pieces of statistical evidence; there are others worth considering too. But like p-values and effect sizes, other metrics of evidential value have shared and unique problems too. Researchers commonly misapply and misinterpret confidence intervals (e.g., Hoekstra et al., 2014; Morey et al., 2016), for example, and the outcome of Bayesian analyses can be distorted by researchers, just as when using p-values (e.g., Simonsohn, 2014).
All metrics of evidence have won and all must have prizes.
References
American Psychological Association. (2010). Publication manual of the American Psychological Association (6th edition). Washington, DC: American Psychological Association.
Baguley, T. (2009). Standardized or simple effect size: What should be reported?. British Journal of Psychology, 100(3), 603-617.
Bakker, M., & Wicherts, J. M. (2011). The (mis) reporting of statistical results in psychology journals. Behavior research methods, 43(3), 666-678.
Borenstein, M., Hedges, L. V., Higgins, J., & Rothstein, H. R. (2009). Introduction to meta-analysis. West Sussex, UK: John Wiley & Sons, Ltd.
Carter, E. C., Schönbrodt, F. D., Gervais, W. M., & Hilgard, J. (2017, August 12). Correcting for bias in psychology: A comparison of meta-analytic methods. Retrieved from osf.io/preprints/psyarxiv/9h3nu
Chan, M. E., & Arvey, R. D. (2012). Meta-analysis and the development of knowledge. Perspectives on Psychological Science, 7(1), 79-92.
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155-159.
Cumming, G. (2008). Replication and p intervals: p values predict the future only vaguely, but confidence intervals do much better. Perspectives on Psychological Science, 3, 286–300.
Dienes, D. (2008). Understanding psychology as a science: An introduction to scientific and statistical inference. New York, NY: Palgrave MacMillan.
Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PloS one, 5(4), e10068.
Gelman, A., & Stern, H. (2006). The difference between “significant” and “not significant” is not itself statistically significant. The American Statistician, 60(4), 328-331.
Gøtzsche, P. C., Hróbjartsson, A., Marić, K., & Tendal, B. (2007). Data extraction errors in meta-analyses that use standardized mean differences. JAMA, 298(4), 430-437.
Greenland, S., Schlesselman, J. J., & Criqui, M. H. (1986). The fallacy of employing standardized regression coefficients and correlations as measures of effect. American Journal of Epidemiology, 123(2), 203-208.
Hoekstra, R., Morey, R. D., Rouder, J. N., & Wagenmakers, E. J. (2014). Robust misinterpretation of confidence intervals. Psychonomic bulletin & review, 21(5), 1157-1164.
John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23(5), 524-532.
Kirk, R. E. (2003). The importance of effect magnitude. In S. F. Davis (Ed.), Handbook of research methods in experimental psychology (pp. 83–105). Malden, MA: Blackwell.
Lakens, D. (2014). Performing high‐powered studies efficiently with sequential analyses. European Journal of Social Psychology, 44(7), 701-710.
Levine, T. R., & Hullett, C. R. (2002). Eta squared, partial eta squared, and misreporting of effect size in communication research. Human Communication Research, 28(4), 612-625.
Maxwell, S. E. (2004). The persistence of underpowered studies in psychological research: causes, consequences, and remedies. Psychological methods, 9(2), 147.
Morey, R. D., Hoekstra, R., Rouder, J. N., Lee, M. D., & Wagenmakers, E. J. (2016). The fallacy of placing confidence in confidence intervals. Psychonomic bulletin & review, 23(1), 103-123.
Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7(6), 615-631.
Nuijten, M. B., Hartgerink, C. H., van Assen, M. A., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior research methods, 48(4), 1205-1226.
Rosenthal, R. (1979). The file drawer problem and tolerance for null results. Psychological Bulletin, 86(3), 638-641.
Sakaluk, J. K. (2016). Exploring small, confirming big: An alternative system to the new statistics for advancing cumulative and replicable psychological research. Journal of Experimental Social Psychology, 66, 47-54.
Schimmack, U. (2014). Quantifying Statistical Research Integrity: The Replicability-Index. Retrieved from http://www.r-index.org
Schmidt, F. L., & Hunter, J. E. (1997). Eight common but false objections to the discontinuation of significance testing in the analysis of research data. In L. L. Harlow, S. A. Mulaik, & J. H. Steiger (Eds.), What if there were no significance tests? (pp. 37–64). Mahwah, NJ: Erlbaum.
Schönbrodt, F. D. (2015). p-checker: One-for-all p-value analyzer. Retrieved from http://shinyapps.org/apps/p-checker/.
Simmons, J. P., Nelson, L. D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological science, 22(11), 1359-1366.
Simonsohn, U. (2013). The folly of powering replications based on observed effect size. Retrieved from http://datacolada.org/4
Simonsohn, U. (2014). Posterior-hacking. Retrieved from http://datacolada.org/13.
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014a). P-curve: A key to the file-drawer. Journal of Experimental Psychology: General, 143(2), 534-547.
Simonsohn, U., Nelson, L. D., & Simmons, J. P. (2014b). P-curve and effect size: Correcting for publication bias using only significant results. Perspectives on Psychological Science, 9(6), 666-681.
Wicherts, J. M., Bakker, M., & Molenaar, D. (2011). Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results. PloS one, 6(11), e26828.
17,934 | Are effect sizes really superior to p-values? | From the perspective of an Epidemiologist, on why I prefer effect sizes over p-values (though as some people have noted, it's something of a false dichotomy):
The effect size tells me what I actually want, the p-value just tells me if it's distinguishable from null. A relative risk of 1.0001, 1.5, 5, and 50 might all have the same p-value associated with them, but mean vastly different things in terms of what we might need to do at a population level.
Relying on a p-value reinforces the notion that significance-based hypothesis testing is the end-all, be-all of evidence. Consider the following two statements: "Doctors smiling at patients was not significantly associated with an adverse outcome during their hospital stay" vs. "Patients whose doctor smiled at them were 50% less likely to have an adverse outcome (p = 0.086)". Given that smiling has essentially no cost, would you still consider suggesting that doctors smile at their patients?
I work with a lot of stochastic simulation models, wherein sample size is a function of computing power and patience, and p-values are essentially meaningless. I have managed to get p < 0.05 results for things that have absolutely no clinical or public health relevance.
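The first point above (very different relative risks carrying similar p-values) can be illustrated with the standard large-sample z test on the log relative risk. This Python sketch (standard library only) uses invented cohort counts, chosen purely so the two p-values land in the same neighborhood:

```python
import math

def rr_p(a, n1, b, n2):
    """Relative risk and two-sided p-value from a log-RR z test,
    using the usual large-sample SE for cohort (cumulative) counts."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    z = math.log(rr) / se
    p = math.erfc(abs(z) / math.sqrt(2))
    return rr, p

# Modest cohort, large effect: 60/1000 exposed vs 40/1000 unexposed
print(rr_p(60, 1000, 40, 1000))                    # RR = 1.5,  p ~ 0.04
# Enormous cohort, negligible effect: risks of 1.01% vs 1.00%
print(rr_p(84_840, 8_400_000, 84_000, 8_400_000))  # RR = 1.01, p ~ 0.04
```

Both results are "significant" at roughly the same level, yet only the first would plausibly change anything at a population level.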
17,935 | Why the p-value of t.test() is not statistically significant when mean values look really different | I agree with @pikachu that the standard deviations are too large
compared with the difference between means for a t test to find
a significant difference.
Thank you for posting your data. It is always a good idea to take a look at some graphic displays
of the data before doing formal tests.
Stripcharts of observations in the two groups do not show a meaningful difference
in locations relative to the variability of the samples.
stripchart(ANGPTL7 ~OSA_status, pch="|", ylim=c(.5,2.5))
Here are boxplots of the two groups. The 'notches' in the sides of
the boxes are nonparametric confidence intervals, calibrated so that
overlapping notches tend to indicate no significant difference in location.
boxplot(ANGPTL7 ~ OSA_status, notch=T,
col="skyblue2", horizontal=T)
Even with sample sizes as large as these, I would be reluctant to do a two-sample t test on account of the marked skewness of the data. I would do a nonparametric two-sample Wilcoxon rank sum test (which also shows no
significant difference).
wilcox.test(ANGPTL7 ~ OSA_status)
Wilcoxon rank sum test with continuity correction
data: ANGPTL7 by OSA_status
W = 1456.5, p-value = 0.2139
alternative hypothesis: true location shift is not equal to 0
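To show the mechanics behind `wilcox.test`, here is a bare-bones rank-sum test in Python (standard library only). It uses the large-sample normal approximation, ignores ties and the continuity correction that R applies, and reports the raw rank sum rather than R's U-style W statistic, so it will not reproduce R's numbers exactly. The toy data are invented:

```python
import math

def rank_sum_p(x, y):
    """Two-sided Wilcoxon rank-sum test via the large-sample normal
    approximation (no ties assumed; no continuity correction)."""
    n1, n2 = len(x), len(y)
    combined = sorted(x + y)
    ranks = {v: i + 1 for i, v in enumerate(combined)}  # assumes no ties
    w = sum(ranks[v] for v in x)               # rank sum of group x
    mu = n1 * (n1 + n2 + 1) / 2                # null mean of w
    sd = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (w - mu) / sd
    return w, math.erfc(abs(z) / math.sqrt(2))

# Toy data: two small, slightly shifted samples
g1 = [1.2, 3.4, 2.2, 5.1, 2.8, 4.0]
g2 = [2.0, 4.4, 3.9, 6.2, 5.5, 4.8]
print(rank_sum_p(g1, g2))  # rank sum 30; p well above 0.05
```

Because the test works on ranks rather than raw values, it is insensitive to the kind of marked skewness described above.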
17,936 | Why the p-value of t.test() is not statistically significant when mean values look really different
When you consider the difference between means you have to use a different unit than the simple absolute difference. Take into account that you are measuring the difference in means produced by two random sources. Those random sources (whose outcomes are your two samples) contain variability. It is this variability which should be used to compare the difference in means. The standard deviations are measured in the same units as the difference, so you should see the difference in terms of standard deviations.
Now you see that the difference in means $|1142-864|$ is much lower than either of the sample standard deviations. If you approximate with a normal distribution, then you will see that the difference in means is about one third of a standard deviation (whichever one is used), so there is a good chance that the difference in means was produced simply by random error. As a rule of thumb for the normal distribution, one standard deviation on either side of the mean covers approximately 68% of the data, and two standard deviations cover around 95%. The mean difference here is far less than either standard deviation, so you should consider the means statistically indistinguishable.
To illustrate the idea of using the mean difference and variation for comparison you can take a look at this image
When you compare the means taking into account the variance you see that the two distributions are not very different, their overlap is large.
When you work out the t test for difference the distribution for the difference in means looks like the following:
The vertical bar is where your mean difference sits in terms of standard deviations, and the distribution displays how it would vary. It should be clear that the standard-deviation-scaled difference in means is not an exceptional value, and as a consequence it cannot produce a small p-value.
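The "difference measured in standard deviations" idea can be made concrete with the summary statistics quoted in this thread (means of 1142 and 864, standard deviations of 1079 and 922). A short sketch computing the standardized difference and the resulting overlap of two normal curves that far apart:

```python
from math import sqrt
from statistics import NormalDist

m1, m2 = 1142.0, 864.0
s1, s2 = 1079.0, 922.0

# Standardized mean difference (Cohen's d with simple average-variance pooling).
pooled_sd = sqrt((s1**2 + s2**2) / 2)
d = abs(m1 - m2) / pooled_sd
print(f"standardized difference d = {d:.3f}")  # roughly 0.28 SD

# Overlap of two unit-variance normals whose means differ by d:
# the overlapping coefficient is 2 * Phi(-d / 2).
overlap = 2 * NormalDist().cdf(-d / 2)
print(f"overlapping coefficient = {overlap:.3f}")  # close to 0.89
```

A standardized difference of roughly a quarter to a third of a standard deviation leaves the two curves overlapping almost completely, which is exactly the picture described above.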
17,937 | Why the p-value of t.test() is not statistically significant when mean values look really different
With respect to plotting the data, I'd like to point to the R package ggbeeswarm. In this case I think it is better than a boxplot or violin plot. The horizontal segments are the quantiles at 5%, 50%, 95%; dat is the data from the OP.
library(ggplot2)
library(ggbeeswarm)
library(data.table)
dat <- as.data.table(dat)
dat[, OSA_status := as.factor(OSA_status)]
qq <- dat[, list(quantile= quantile(ANGPTL7, p= c(0.05, 0.5, 0.95))), by= OSA_status]
gg <- ggplot(data= dat, aes(x= OSA_status, y= ANGPTL7)) +
geom_quasirandom(width= 0.25) +
geom_segment(data= qq, aes(y= quantile, yend= quantile, x= as.numeric(OSA_status)-0.1, xend= as.numeric(OSA_status)+0.1), colour= 'blue') +
  theme_light()
17,938 | Why the p-value of t.test() is not statistically significant when mean values look really different
The standard deviation (uncertainty) is too high: at the chosen significance level (I assume the default $\alpha = 0.05$) the confidence intervals overlap, so there is no statistically significant difference between the means.
EDIT: More precisely, the CI of one mean covers the point estimate of the other mean, so they are not significantly different; overlap of the two CIs alone would not be enough to conclude this. Thanks for the correction from the comments.
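The correction in the EDIT is worth illustrating: overlapping 95% confidence intervals do not by themselves imply a non-significant difference. A hypothetical example (all numbers invented) with two estimates whose intervals overlap, yet whose difference is significant at the 5% level:

```python
from math import sqrt
from statistics import NormalDist

z95 = NormalDist().inv_cdf(0.975)  # about 1.96

# Hypothetical estimates: means 0 and 1, each with standard error 0.3.
m1, se1 = 0.0, 0.3
m2, se2 = 1.0, 0.3

ci1 = (m1 - z95 * se1, m1 + z95 * se1)   # (-0.588, 0.588)
ci2 = (m2 - z95 * se2, m2 + z95 * se2)   # ( 0.412, 1.588)
overlap = ci1[1] > ci2[0]                # the intervals overlap

# z test for the difference: the SE of the difference combines in quadrature,
# so it is smaller than the sum of the two interval half-widths.
z = (m2 - m1) / sqrt(se1**2 + se2**2)    # about 2.36 > 1.96
p = 2 * (1 - NormalDist().cdf(z))        # below 0.05

print(overlap, round(z, 2), round(p, 3))
```

The intervals overlap, yet the test rejects: checking whether one CI covers the other point estimate, as the EDIT does, is the safer informal rule.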
17,939 | Why the p-value of t.test() is not statistically significant when mean values look really different
Excellent points in other answers. I went a little further in looking at the data. The common practice of firing up box plots when the question is about means is a good idea, because a good box plot will give you a good overview of the data, but it is a little indirect whenever -- as is a typical default in software -- the means aren't shown at all, but have to be guessed at, or inserted mentally from another calculation, or indeed added whenever the software allows. I started out with something a little like @BruceET's display, considered a logarithmic transformation and then retreated because that went too far. A square root scale seemed to work better for visualization.
This display is a hybrid quantile and box plot using square root scale. Each box plot is minimal and shows only minimum, maximum, median and quartiles, for all of which it is true in principle that summary(root()) $=$ root(summary()). The box plot is minimal because alongside each a quantile plot is shown with all the observed values in order. The horizontal reference line is at the mean of the square roots, which is close to the median in each case.
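The identity summary(root()) $=$ root(summary()) for minimum, maximum, median and quartiles holds because the square root is monotone increasing, so it commutes with order statistics. A quick Python check on a small made-up sample (odd length, so the median is an actual data point rather than an interpolated value):

```python
import numpy as np

# Made-up sample of 7 nonnegative values (odd length: median is a data point).
x = np.array([2.5, 5.0, 5.0, 10.0, 20.0, 45.0, 125.0])

# A monotone transform commutes with order statistics, so the square root
# of the min/median/max equals the min/median/max of the square roots.
for stat in (np.min, np.median, np.max):
    assert np.isclose(stat(np.sqrt(x)), np.sqrt(stat(x)))

print("sqrt commutes with min/median/max on this sample")
```

The same holds for the quartiles whenever they fall on observed values; with interpolated quantiles the two sides can differ slightly, which is why the minimal box plot described above sticks to order statistics.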
Fine structure is revealed by the quantile plot that the box plot conceals, and an easy tabulation shows that all values are multiples of 5, except that there is a minor mode at the lowest reported value, 2.5. That leads me to guess at a detection limit problem, such that any observed zero has been fudged, nudged or kludged to half the smallest non-zero value of 5. (I take it that the data input with some values just above and some values just below integers shows some kind of precision problem.)
Although the square root scale works well at achieving symmetry, further analysis might use a generalized linear model with log link as being appropriate for counts or count-like outcomes -- except that, as we started, it seems that there is at best a small difference between these groups on this outcome that doesn't achieve conventional significance.
EDIT Some experiments with generalized linear models showed that it doesn't much matter what link (say identity, square root or log) or family (say Gaussian or gamma or Poisson) you specify. A hypothesis of different means yields P-values around 0.15 in every seemingly plausible case tried. The science may run that there should be a notable difference, but you need a bigger sample to establish it conventionally.
17,940 | Why the p-value of t.test() is not statistically significant when mean values look really different
A number by itself is rather meaningless. Is 1000 a large number? A 1000 kg sandwich is really big, but a 1000 mg one is tiny. When we look at the difference between two means, we want to compare it to something, and usually that something is the standard deviation. The basic formula for combining two population standard deviations is like the Pythagorean formula: the combined SD squared is the sum of the SDs squared (there are more complicated formulas to take into account that we don't actually have the population SDs but instead are estimating them from the samples, but the differences between them aren't important here). This gives us $\sqrt{1079^2 + 922^2}$, or about 1419. The difference between the means is 278, and 278 is only about 20% of the combined standard deviation, which is rather small. For a two-tailed test, we'd generally need around 200% to consider the difference significant.
For the standard deviation of the sample mean, we need to also take the sample sizes into account. As a result, the SD for the distribution of sample means is smaller than the population standard deviation, and thus the difference-to-SD ratio is higher, but not enough to reach significance.
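The arithmetic above is easy to reproduce. Combining the two standard deviations in quadrature and comparing the mean difference to the result:

```python
from math import sqrt

sd1, sd2 = 1079.0, 922.0
mean_diff = 1142.0 - 864.0  # = 278

# Combine SDs like the Pythagorean formula: variances add.
combined_sd = sqrt(sd1**2 + sd2**2)  # about 1419
ratio = mean_diff / combined_sd      # about 0.20, i.e. roughly 20%

print(f"combined SD = {combined_sd:.0f}, difference / SD = {ratio:.2f}")
```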
17,941 | Why the p-value of t.test() is not statistically significant when mean values look really different
Most of the answers so far have focused on the distribution of the data itself, but that is not the correct reason that the t test is not rejecting. The $p$ value of the test you did will be less than $0.05$ whenever the test statistic $$\frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\frac{S_1^2}{n_1} + \frac{S_2^2}{n_2}}}$$ is larger in absolute value than roughly $2$. The numerator of this test statistic is the difference in sample means, which you correctly point out is large in your case. However, the denominator, which is determined by the (squared) standard errors of the sample means (i.e. $S_1^2/n_1$ and $S_2^2/n_2$), matters too. In your case, the standard errors are high enough that the test statistic is too small to reject.
That's all there is to it. Let's run through a series of other questions and evaluate whether they influence the answer:
Does the normality of the data matter? No.
Does the skewness of the data matter? No.
Does the distribution of the data matter? No, except for the sample mean and standard deviation.
Does the data standard deviation give you full information about the relevant variability? No, the standard error is what matters, which depends on the sample size.
All you need to know is the difference in sample means $\bar{X}_1 - \bar{X}_2$ and each group's (squared) standard error of the mean, $S_1^2/n_1$ and $S_2^2/n_2$. I can't emphasize this enough. No other term appears in the statistic above (and none other appears in the null distribution).
Almost all of the earlier answers have focused exclusively on the data variances (i.e. $S_1^2$ and $S_2^2$) instead of the standard errors of the sample means (i.e. $S_1^2/n_1$ and $S_2^2/n_2$). This is a mistake which I hope hasn't led to too much confusion. They've also focused on inspecting whether the t test is appropriate for your data. This is an interesting question, which must be answered to determine the true type I error rate of the $t$ test on your data, but it isn't the one you asked.
A convenient way to visualize these relevant terms is below. Since only the means and standard errors matter, we are assuming the assumptions of the $t$ test for convenience. Each curve below visualizes how much information we have to determine the true mean. Because they overlap considerably, we don't have enough information to separate the two means, and so the $t$-test does not reject. Note, it does not matter whether the data distributions overlap or not.
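SciPy can compute this statistic directly from summary statistics via `scipy.stats.ttest_ind_from_stats`. The thread's excerpt gives means of 1142 and 864 and standard deviations of 1079 and 922 but not the group sizes, so the sizes below (50 per group) are an assumption purely for illustration; with them the Welch statistic is about 1.4, well short of the roughly 2 needed to reject:

```python
from scipy.stats import ttest_ind_from_stats

# Summary statistics quoted in the thread; the group sizes n1, n2 are
# assumed (not given in the excerpt) just to make the arithmetic concrete.
m1, s1, n1 = 1142.0, 1079.0, 50
m2, s2, n2 = 864.0, 922.0, 50

# equal_var=False gives the Welch (unequal-variance) t test,
# matching the statistic written above.
t, p = ttest_ind_from_stats(m1, s1, n1, m2, s2, n2, equal_var=False)
print(f"t = {t:.2f}, p = {p:.3f}")  # t well below 2, p well above 0.05
```

Only the means and the $S_i^2/n_i$ terms enter this computation, which is the whole point of the answer: no other feature of the data distribution matters to the value of the statistic.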
17,942 | My distribution is normal; Kolmogorov-Smirnov test doesn't agree
You have no basis to assert your data are normal. Even if your skewness and excess kurtosis both were exactly 0, that doesn't imply your data are normal. While skewness and kurtosis far from the expected values indicate non-normality, the converse doesn't hold. There are non-normal distributions that have the same skewness and kurtosis as the normal. An example is discussed here, the density of which is reproduced below:
As you see, it's distinctly bimodal. In this case, the distribution is symmetric, so as long as sufficient moments exist, the typical skewness measure will be 0 (indeed all the usual measures will be). For the kurtosis, the contribution to 4th moments from the region close to the mean will tend to make the kurtosis smaller, but the tail is relatively heavy, which tends to make it larger. If you choose just right, the kurtosis comes out with the same value as for the normal.
Your sample skewness is actually around -0.5, which is suggestive of mild left-skewness. Your histogram and Q-Q plot both indicate the same - a mildly left-skew distribution. (Such mild skewness is unlikely to be a problem for most of the common normal-theory procedures.)
You're looking at several different indicators of non-normality which you shouldn't expect to agree a priori, since they consider different aspects of the distribution; with smallish mildly non-normal samples, they'll frequently disagree.
Now for the big question: *Why are you testing for normality?*
[edited in response from comments:]
I'm not really sure, I thought I should before doing an ANOVA
There are a number of points to be made here.
i. Normality is an assumption of ANOVA if you're using it for inference (such as hypothesis testing), but ANOVA is not especially sensitive to non-normality in larger samples - mild non-normality is of little consequence, and as sample sizes increase the data can be somewhat more non-normal while the test is still only a little affected.
ii. You appear to be testing normality of the response (the DV). The (unconditional) distribution of the DV itself is not assumed to be normal in ANOVA. You check the residuals to assess the reasonableness of the assumption about the conditional distribution (that is, it's the error term in the model that's assumed normal) - i.e. you don't seem to be looking at the right thing. Indeed, because the check is done on residuals, you do it after model fitting, rather than before.
iii. Formal testing can be next to useless. The question of interest here is 'how badly is the degree of non-normality affecting my inference?', which the hypothesis test really doesn't respond to. As the sample size gets larger, the test becomes more and more able to detect trivial differences from normality, while the effect on the significance level in the ANOVA becomes smaller and smaller. That is, if your sample size is reasonably large, the test of normality is mostly telling you you have a large sample size, which means you may not have much to worry about. At least with a Q-Q plot you have a visual assessment of how non-normal it is.
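Point iii can be demonstrated numerically. The sketch below builds deterministic samples from a normal distribution whose mean is shifted by a trivial 0.03 (using evenly spaced quantiles, so there is no simulation noise) and runs the one-sample Kolmogorov-Smirnov test against N(0, 1). At n = 200 the test sees nothing; at n = 100000 the same trivial shift is "highly significant":

```python
import numpy as np
from scipy.stats import kstest, norm

def ks_p_for_shift(n, shift=0.03):
    # Deterministic sample: evenly spaced quantiles of N(shift, 1), so the
    # only departure from the N(0, 1) null is the tiny mean shift.
    probs = (np.arange(1, n + 1) - 0.5) / n
    x = norm.ppf(probs, loc=shift)
    return kstest(x, "norm").pvalue

p_small = ks_p_for_shift(200)      # large p: the shift is undetectable
p_large = ks_p_for_shift(100_000)  # tiny p: the same shift now "rejects"
print(f"n=200: p = {p_small:.3f};  n=100000: p = {p_large:.2e}")
```

Nothing about the practical importance of the departure changed between the two runs; only the sample size did, which is exactly why the test answers the wrong question.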
iv. at reasonable sample sizes, other assumptions - like equality of variance and independence - generally matter much more than mild non-normality. Worry about the other assumptions first ... but again, formal testing isn't answering the right question
v. choosing whether you do an ANOVA or some other test based on the outcome of a hypothesis test tends to have worse properties than simply deciding to act as if the assumption doesn't hold. (There are a variety of methods that are suitable for one-way ANOVA-like analyses on data that isn't assumed to be normal that you can use whenever you don't think you have reason to assume normality. Some have very good power at the normal, and with decent software there's no reason to avoid them.)
[I believe I had a reference for this last point but I can't locate it right now; if I find it I'll try to come back and put it in]
You have no basis to assert your data are normal. Even if your skewness and excess kurtosis both were exactly 0, that doesn't imply your data are normal. While skewness and kurtosis far from the expected values indicate non-normality, the converse doesn't hold. There are non-normal distributions that have the same skewness and kurtosis as the normal. An example is discussed here, the density of which is reproduced below:
As you see, it's distinctly bimodal. In this case, the distribution is symmetric, so as long as sufficient moments exist, the typical skewness measure will be 0 (indeed all the usual measures will be). For the kurtosis, the contribution to 4th moments from the region close to the mean will tend to make the kurtosis smaller, but the tail is relatively heavy, which tends to make it larger. If you choose just right, the kurtosis comes out with the same value as for the normal.
Your sample skewness is actually around -0.5, which is suggestive of mild left-skewness. Your histogram and Q-Q plot both indicate the same - a mildly left-skew distribution. (Such mild skewness is unlikely to be a problem for most of the common normal-theory procedures.)
You're looking at several different indicators of non-normality which you shouldn't expect to agree a priori, since they consider different aspects of the distribution; with smallish mildly non-normal samples, they'll frequently disagree.
Now for the big question: *Why are you testing for normality?*
[edited in response from comments:]
I'm not really sure , I though I should before doing an ANOVA
There are a number of points to be made here.
i. Normality is an assumption of ANOVA if you're using it for inference (such as hypothesis testing), but it's not especially sensitive to non-normality in larger samples - mild non-normality is of little consequence and as sample sizes increase the distribution may become more non-normal and the test may be only a little affected.
ii. You appear to be testing normality of the response (the DV). The (unconditional) distribution of DV itself is not assumed to be normal in ANOVA. You check the residuals to assess the reasonableness of the assumption about the conditional distribution (that is, its the error term in the model that's assumed normal) - i.e. you don't seem to be looking at the right thing. Indeed, because the check is done on residuals, you do it after model fitting, rather than before.
iii. Formal testing can be next to useless. The question of interest here is 'how badly is the degree of non-normality affecting my inference?', which the hypothesis test really doesn't respond to. As the sample size gets larger, the test becomes more and more able to detect trivial differences from normality, while the effect on the significance level in the ANOVA becomes smaller and smaller. That is, if your sample size is reasonably large, the test of normality is mostly telling you you have a large sample size, which means you may not have much to worry about. At least with a Q-Q plot you have a visual assessment of how non-normal it is.
iv. at reasonable sample sizes, other assumptions - like equality of variance and independence - generally matter much more than mild non-normality. Worry about the other assumptions first ... but again, formal testing isn't answering the right question.
v. choosing whether you do an ANOVA or some other test based on the outcome of a hypothesis test tends to have worse properties than simply deciding to act as if the assumption doesn't hold. (There are a variety of methods that are suitable for one-way ANOVA-like analyses on data that isn't assumed to be normal that you can use whenever you don't think you have reason to assume normality. Some have very good power at the normal, and with decent software there's no reason to avoid them.)
[I believe I had a reference for this last point but I can't locate it right now; if I find it I'll try to come back and put it in]
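Point (ii) can be made concrete with a small stdlib-only sketch (the three groups and their means below are invented for illustration): the unconditional DV pools several shifted group distributions, while the ANOVA residuals are simply each observation minus its group mean — and the residuals are what the normality assumption actually concerns.

```python
import random
import statistics

random.seed(0)
# Hypothetical one-way layout: three groups whose means differ.
groups = {g: [random.gauss(mu, 1.0) for _ in range(30)]
          for g, mu in [("a", 0.0), ("b", 2.0), ("c", 4.0)]}

# The raw DV mixes three shifted normals, so it can look non-normal
# even though the error term (each group around its own mean) is normal.
dv = [x for xs in groups.values() for x in xs]

# ANOVA residuals: each observation minus its group mean.
residuals = [x - statistics.mean(xs)
             for xs in groups.values() for x in xs]

print(round(statistics.stdev(dv), 2), round(statistics.stdev(residuals), 2))
```

The pooled DV has a much larger spread (and a lumpy shape) than the residuals; assessing normality on `dv` rather than `residuals` answers the wrong question.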
17,943 | My distribution is normal; Kolmogorov-Smirnov test doesn't agree | The Kolmogorov-Smirnov Test has a fair bit of power when sample sizes are large, so it can be easy to reject the null hypothesis that your data do not differ from normality. In other words, the test will sometimes suggest that a distribution is not normal in large samples even if it is normal for most practical purposes.
Think of it like a t-test. If you have two populations that differ in height by only a thousandth of a millimetre, an incredibly large sample will statistically support that these are different, even if the difference is meaningless.
Perhaps you can rely on other methods to determine the normality of your data. The plots you use are two good examples, as well as the skew/kurtosis values.
This other topic seems particularly related: Is normality testing 'essentially useless'?
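To see why power grows with sample size, here is a stdlib-only sketch (the mixture weights and sample are invented): it computes the one-sample KS distance between a sample and a normal fitted to it. For a mildly non-normal mixture the distance settles near a small positive constant, while KS critical values shrink like $1/\sqrt{n}$ — so a large enough sample pushes even a tiny, practically irrelevant distance past the rejection line.

```python
import math
import random

def ks_distance(data):
    """KS distance between the empirical CDF and a normal distribution
    whose mean/sd are estimated from the same data."""
    n = len(data)
    mu = sum(data) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in data) / (n - 1))
    d = 0.0
    for i, x in enumerate(sorted(data)):
        f = 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))
        d = max(d, (i + 1) / n - f, f - i / n)
    return d

random.seed(1)
# Mildly non-normal data: a 95/5 mixture of two normals (invented numbers).
sample = [random.gauss(0, 1) if random.random() < 0.95 else random.gauss(2, 1)
          for _ in range(5000)]
print(round(ks_distance(sample), 3))  # small in absolute terms
```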
17,944 | My distribution is normal; Kolmogorov-Smirnov test doesn't agree | The Kolmogorov–Smirnov test is distribution-free when the null hypothesis is fully specified—if the mean & variance are estimated from the data, be sure to use the Lilliefors variant when testing normality (if you must). That's not to gainsay the other answers.
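A quick stdlib-only simulation of why this matters (sample size and repetition count are arbitrary choices): when the mean and sd are estimated from the same data, the fitted normal hugs the sample, so the KS statistic runs systematically smaller than under a fully specified null. Standard KS tables would then be too lenient — which is what the Lilliefors correction accounts for.

```python
import math
import random
import statistics

def ks_stat(data, mu, sd):
    """One-sample KS distance between the empirical CDF of data and N(mu, sd)."""
    n = len(data)
    d = 0.0
    for i, x in enumerate(sorted(data)):
        f = 0.5 * (1.0 + math.erf((x - mu) / (sd * math.sqrt(2.0))))
        d = max(d, (i + 1) / n - f, f - i / n)
    return d

rng = random.Random(0)
n, reps = 50, 2000
fixed, fitted = [], []
for _ in range(reps):
    xs = [rng.gauss(0, 1) for _ in range(n)]
    fixed.append(ks_stat(xs, 0.0, 1.0))                  # fully specified null
    fitted.append(ks_stat(xs, statistics.mean(xs),       # parameters estimated
                          statistics.stdev(xs)))         # from the same sample
# Fitting pulls the normal curve toward the data: the statistic shrinks.
print(round(statistics.mean(fixed), 3), round(statistics.mean(fitted), 3))
```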
17,945 | Is this causation? | can we say that A causes B?
No, this is (presumably) a simple observational study. To infer causation it is necessary (but not necessarily sufficient) to conduct an experiment or a controlled trial.
Just because you are able to make good predictions does not say anything about causality. If I observe the number of people who carry cigarette lighters, this will predict the number of people who have a cancer diagnosis, but it doesn't mean that carrying a lighter causes cancer.
Edit: To address one of the points in the comments:
But now I wonder: can there ever be causation without correlation?
Yes. This can happen in a number of ways. One of the easiest to demonstrate is where the causal relation is not linear. For example:
> X <- 1:20
> Y <- 21*X - X^2
> cor(X,Y)
[1] 0
Clearly Y is caused by X, yet the correlation is zero.
17,946 | Is this causation? | Both of the previous answers are good, but I want to dive into the weeds on this question a little more. So we know that correlation is not causation, but correlation is also not not causation. So when do we get to say that correlation is causation. Unfortunately, the data itself can never tell us this, we can only arrive at this by imposing assumptions on the data.
Simple Example:
I am going to use directed acyclic graphs (DAGs) since they graphically encode the assumptions. Let's focus on three variables: $A$, $B$, and $U$ (you can extend this to more, but the basic concepts remain the same). $U$ is some variable we did not have the opportunity to collect. Each arrow in the DAG indicates a causal relationship, with the direction of the arrow indicating what causes what. For three variables (and the ordering restriction), following are some possible DAGs that will result in a correlation between $A$ and $B$:
Correlation is causation in only DAGs numbered 1, 2, and 3; which requires appealing to outside knowledge (although 3 is tricky since $U$ being a common cause of both $A$ and $B$ can flip the relationship from the true causal direction, e.g. $A$ is protective from $B$ in reality but $U$ makes it look harmful).
One way to determine whether correlation is consistent with causation is if we conducted a randomized experiment. If we did not randomize based on $U$ and $B$ was measured after $A$ was randomized, then we know that an arrow from $U$ to $A$ and $B$ to $A$ are implausible. Therefore, we can say that the correlation is causation. Alternatively, maybe we have some subject matter knowledge on the topic of $A$ and $B$ that says there are no common causes (unlikely in reality but this is only an example), similarly we can say that correlation is causation.
The important part is that the assumptions used to claim correlation is causation are supported by outside knowledge. How and exactly what outside knowledge is needed is an important issue.
Conclusion:
There are a variety of frameworks and formal assumptions that can be used to make the claim that a certain correlation is causation. The key part is that the data alone cannot tell you whether a correlation is or isn't causation. Some outside assumptions or procedures must be applied in order to distinguish non-causal correlations from causal correlations.
Aside:
As to my example of a scenario with causation but no correlation, DAGs are assumed to be faithful. This basically means that there are no perfect cancellations that occur (all the individual causal effects don't cancel out perfectly to result in no average causal effect). Because of this, it is a little trickier to claim that no correlation means no causation.
17,947 | Is this causation? | No, you cannot say A causes B. The table you have only describes associations between A and B. Even if you know A accurately predicted B a large percentage of the time, that does not imply that A causes B. It may, in fact, be that A causes some other, confounding variable C to occur that is highly correlated with B.
17,948 | Is this causation? | Prediction means that entropy is reduced. That is, if A predicts B, then the entropy of the distribution of B is greater than the entropy of distribution B conditioned on A.
Prediction is symmetric. If A predicts B, then B predicts A (barring degenerate cases).
Causation is not symmetric. Causation refers to an asymmetric relationship between two events. So it follows that prediction does not mean causation.
In the case that you present, A and B do not predict each other. While the entropy of B given A is low, it's just as low without knowing A.
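The entropy framing can be checked numerically (the joint distribution below is invented for illustration): the entropy reduction $H(B)-H(B\mid A)$ is the mutual information, which is symmetric in $A$ and $B$ — the precise sense in which prediction is symmetric.

```python
import math

def H(dist):
    """Shannon entropy (bits) of a distribution given as {value: probability}."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Invented joint distribution over (A, B) under which A predicts B fairly well.
joint = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}
pA = {a: sum(p for (a2, _), p in joint.items() if a2 == a) for a in (0, 1)}
pB = {b: sum(p for (_, b2), p in joint.items() if b2 == b) for b in (0, 1)}

# H(B|A) = sum_a P(a) H(B | A=a), and symmetrically for H(A|B).
H_B_given_A = sum(pA[a] * H({b: joint[(a, b)] / pA[a] for b in (0, 1)})
                  for a in (0, 1))
H_A_given_B = sum(pB[b] * H({a: joint[(a, b)] / pB[b] for a in (0, 1)})
                  for b in (0, 1))

# Both entropy reductions equal the mutual information: 0.278 bits here.
print(round(H(pB) - H_B_given_A, 3), round(H(pA) - H_A_given_B, 3))
```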
17,949 | Probability of winning a game where you sample an increasing sequence from a uniform distribution | You can solve this combinatorially, without using calculus. All you need to look at is the probability that the first $n$ samples are in a certain order, and for any particular order this is simply $1/n!$
The game ends after exactly $n$ steps if and only if the first $n-1$ samples are in increasing order, and the last sample is not. The last sample can occupy any of the $n$ positions except the highest, so there are $n-1$ such sequences; hence the probability that the game ends after exactly $n$ steps is $\frac{n-1}{n!}$.
And $A$ wins if the game ends after an even number of steps, so $A$'s probability of winning is
$$\begin{align}
\sum_{n=1}^\infty\frac{2n-1}{(2n)!} & = \sum_{n=1}^\infty\left(\frac{1}{(2n-1)!}-\frac{1}{(2n)!}\right) \\
& = \sum_{n=1}^\infty\frac{(-1)^{n+1}}{n!} \\
& = 1-\sum_{n=0}^\infty\frac{(-1)^n}{n!} \\
& = 1 - \frac{1}{e}
\end{align}$$
This assumes nothing about the particular distribution of the samples, except that it is continuous. So the answer is the same whatever the distribution.
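A short Monte Carlo check of both claims (trial count is arbitrary): the win probability is close to $1-1/e\approx 0.632$, and swapping the uniform for another continuous distribution leaves it unchanged.

```python
import math
import random

def game_length(draw):
    """Number of draws until one fails to exceed the running maximum."""
    n, mx = 0, -math.inf
    while True:
        x = draw()
        n += 1
        if x <= mx:
            return n
        mx = x

random.seed(42)
trials = 200_000
for draw in (random.random, lambda: random.expovariate(1.0)):
    # Player A wins when the game ends on an even-numbered draw.
    p_hat = sum(game_length(draw) % 2 == 0 for _ in range(trials)) / trials
    print(round(p_hat, 3))
print(round(1 - math.exp(-1), 3))  # 0.632
```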
17,950 | Probability of winning a game where you sample an increasing sequence from a uniform distribution | You're on the right track.
For $0\le a \le 1,$ let $p(a)$ be the chance of losing when it's your turn and $a$ is the largest value drawn so far. In order to lose (a) you have to draw a number $x$ between $0$ and $a$, making certain your chances of losing, or (b) you draw $x \ge a$ then your opponent, faced with the new value $x,$ must win, which she does with probability $1 - p(x).$ We have to average these possibilities over all the possible values of $x,$ giving the recursion
$$p(a) = \int_0^a \mathrm{d}x + \int_a^1 (1-p(x))\,\mathrm{d}x = 1 - \int_a^1 p(x)\,\mathrm{d}x.\tag{*}$$
At the outset, $a=0,$ it's your turn, and therefore you want to find the chance of winning, which is $1-p(0).$
Let $P(a) = \int_a^1 p(x)\,\mathrm{d}x.$ This is a differentiable function with derivative $P^\prime(a) = -p(a).$ In these terms $(*)$ becomes
$$-P^\prime(a) = 1-P(a).$$
Since $p(a)\lt 1$ for most $a,$ $P(a) \lt 1$ for all $a,$ allowing us to divide both sides by $1-P(a),$ giving
$$\frac{\mathrm d}{\mathrm{d}a} \log(1-P(a)) = \frac{-P^\prime(a)}{1-P(a)} = 1.$$
Integrating both sides and using $P(1)=0$ gives the unique solution $P(a) = 1 - \exp(a-1).$ Taking the derivative yields
$$p(a) = -P^\prime(a) = e^{a-1}.$$
The solution therefore is $1-p(0) = 1-e^{-1} \approx 0.632.$
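The closed form $p(a)=e^{a-1}$ can be sanity-checked by simulation (state values and trial count chosen arbitrarily): play the game out from a current maximum $a$ and count how often the player to move ends up losing.

```python
import math
import random

def p_lose(a, trials=100_000, rng=random.Random(0)):
    """Estimate p(a): the mover loses if the game ends on one of their draws."""
    losses = 0
    for _ in range(trials):
        mx, my_turn = a, True
        while True:
            x = rng.random()
            if x <= mx:          # this draw ends the game
                losses += my_turn
                break
            mx, my_turn = x, not my_turn
    return losses / trials

for a in (0.0, 0.5):
    print(round(p_lose(a), 3), round(math.exp(a - 1), 3))  # estimate vs e^(a-1)
```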
17,951 | Probability of winning a game where you sample an increasing sequence from a uniform distribution | Here's a basic solution:
$$P[\text{player A wins}] = \sum_{n=1}^\infty P[\text{player A wins on draw } 2n]$$
$P[\text{player A wins on draw } 2n]$ is the probability that a random permutation of a set of $2n$ distinct numbers has the first $2n - 1$ in ascending order and the last is not maximal. How many such permutations exist?
There are $2n - 1$ possibilities for a non-maximal final element and, conditional on that, one ascending ordering of the others, i.e.,
$$P[\text{player A wins on draw } 2n] = \frac{2n - 1}{(2n)!}$$
A little algebra gives
$$P[\text{player A wins}] = \frac{1}{1!} - \frac{1}{2!} + \frac{1}{3!} - \frac{1}{4!} + ...$$
This alternating sum of reciprocal factorials might look familiar:
$$\exp(-1) = 1 - \frac{1}{1!} + \frac{1}{2!} - \frac{1}{3!} + \frac{1}{4!} - ...$$
thus
$$P[\text{player A wins}] = 1 - \exp(-1)$$
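A numerical check of the algebra (the number of terms is chosen for convenience): the partial sums of $\sum_{n\ge 1}(2n-1)/(2n)!$ converge rapidly to $1-\exp(-1)$.

```python
import math
from math import factorial

# Partial sum of the per-round win probabilities (2n - 1) / (2n)!.
total = sum((2 * n - 1) / factorial(2 * n) for n in range(1, 10))
print(round(total, 6), round(1 - math.exp(-1), 6))  # both 0.632121
```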
$$P[\text{player A wins}] = \sum_{n=1}^\infty P[\text{player A wins on draw } 2n]$$
$P[\text{player A wins on draw } 2n]$ is the probability that a random permutation of a set | Probability of winning a game where you sample an increasing sequence from a uniform distribution
Here's a basic solution:
$$P[\text{player A wins}] = \sum_{n=1}^\infty P[\text{player A wins on draw } 2n]$$
$P[\text{player A wins on draw } 2n]$ is the probability that a random permutation of a set of $2n$ distinct numbers has the first $2n - 1$ in ascending order and the last is not maximal. How many such permutations exist?
There are $2n - 1$ possibilities for a non-maximal final element and, conditional on that, one ascending ordering of the others, i.e.,
$$P[\text{player A wins on draw } 2n] = \frac{2n - 1}{(2n)!}$$
A little algebra gives
$$P[\text{player A wins}] = \frac{1}{1!} - \frac{1}{2!} + \frac{1}{3!} - \frac{1}{4!} + ...$$
This alternating sum of reciprocal factorials might look familiar:
$$\exp(-1) = 1 - \frac{1}{1!} - \frac{1}{2!} + \frac{1}{3!} - \frac{1}{4!} + ...$$
thus
$$P[\text{player A wins}] = 1 - \exp(-1)$$ | Probability of winning a game where you sample an increasing sequence from a uniform distribution
Here's a basic solution:
$$P[\text{player A wins}] = \sum_{n=1}^\infty P[\text{player A wins on draw } 2n]$$
$P[\text{player A wins on draw } 2n]$ is the probability that a random permutation of a set |
17,952 | Probability of winning a game where you sample an increasing sequence from a uniform distribution | You need to make use of the fact that the probability of being greater than the maximum of a set of iid random variables is equivalent to being greater than each one of those individual random variables. This allows you to rephrase the maximum as a product. See the (flipped) derivation here https://stats.stackexchange.com/a/32353/60065
17,953 | Probability of winning a game where you sample an increasing sequence from a uniform distribution | The solution is provided in Devroye's Non-uniform random variate generation [open access] book as the basis of Von Neumann's exponential random generator:
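For reference, here is a sketch of von Neumann's algorithm as it is commonly presented (paraphrased, so treat it as illustrative rather than as Devroye's exact pseudocode): the probability that the initial descending run of uniforms has odd length with first value at most $x$ is $1-e^{-x}$, and shifting rejected rounds by integers turns this into an Exp(1) sampler built from uniforms and comparisons alone.

```python
import random

def vn_exponential(rng):
    """Von Neumann's exponential generator: accept the first uniform of a
    descending run when the run length is odd; each rejected round adds 1."""
    shift = 0
    while True:
        first = rng.random()
        u, run = first, 1
        while True:              # extend the run u1 >= u2 >= ...
            v = rng.random()
            if v > u:            # the descending run just ended
                break
            u, run = v, run + 1
        if run % 2 == 1:         # odd run length: accept
            return shift + first
        shift += 1               # even: move to the next unit interval

rng = random.Random(7)
xs = [vn_exponential(rng) for _ in range(100_000)]
print(round(sum(xs) / len(xs), 2))  # sample mean should sit near 1
```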
17,954 | Is "not independent" the same as "dependent" in English? | In statistics, “dependent” and “not independent” have the same meaning. There is no inherent notion of causation.
In regular English, I would say that “dependent” implies causation. Dinner temperature depends on oven temperature, not the other way around.
17,955 | Is "not independent" the same as "dependent" in English? | Independence is more properly termed mutual independence, which eliminates the use of "$A$ is independent of $B$" and replaces it by "$A$ and $B$ are mutually independent". Thus, there is no such thing as $A$ being independent of $B$ and wondering whether that implies that $B$ is dependent on $A$: independence is mutual. Be aware that "$A$ is independent of $B$ if $P(A\mid B) = P(A)$" is an incomplete statement as a definition: $A$ and $B$ can be independent even if $P(A\mid B)$ is undefined, e.g. as when $B$ is an event of probability $0$.
The generally accepted definition of independent events is that
$A$ and $B$ are said to be (mutually) independent events if $P(A\cap B) = P(A)P(B)$,
and as in all definitions, the "if" is understood to be "iff" or "if and only if".
Note the complete absence of "independent of" and the symmetry in the roles of $A$ and $B$. Except for those who do not believe in the commutativity of multiplication of real numbers or the commutativity of set intersection, the definition works equally well if we interchange $A$ and $B$ throughout in the definition.
Finally, turning to the question of whether "not independent" means "dependent", the answer is Yes.
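A concrete check of the product definition with exact arithmetic (the two events are a standard textbook pair, chosen purely for illustration):

```python
from fractions import Fraction

# Sample space: two fair dice.
outcomes = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def pr(event):
    """Exact probability of an event (a predicate on outcomes)."""
    return Fraction(sum(1 for o in outcomes if event(o)), len(outcomes))

A = lambda o: o[0] % 2 == 0          # first die is even
B = lambda o: o[0] + o[1] == 7       # the dice sum to 7

# P(A ∩ B) = P(A) P(B): A and B are (mutually) independent -- and the
# definition is plainly symmetric in A and B.
print(pr(lambda o: A(o) and B(o)) == pr(A) * pr(B))  # True
```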
17,956 | Is "not independent" the same as "dependent" in English? | In probability calculus there is no expression for causal dependency. Its semantics cannot express the popular example that manipulating a barometer does not change the weather, while changes in the weather do change barometer measurements. Two events either 'tend to occur together' (correlate) or not.
The very definition of independence is (probably) derived from the idea that knowing whether $B$ occurred does not change the probability of event $A$ occurring. This is formally written as $P(A) = P(A|B)$.
The negation of this state is lack of independence: the occurrence of event $B$ increases or decreases the probability of event $A$ occurring. This is true for the barometer and the weather, and is expressed as $P(A) \neq P(A|B)$.
Mathematicians often know that their "not independent" is not always the 'true' dependency, and refrain from using causally loaded expressions, especially since in econometrics and causal inference such definitions do exist. Therefore in some probability calculus courses you will hear that dependency was never discussed; the ideas discussed were non-independence and correlation.
The mathematical tool which analyses dependency in the more natural meaning is do-calculus (by Judea Pearl). This tool extends standard probability calculus with the do operator, which describes an intervention in the system. For the barometer and the weather (with $A$ the weather and $B$ the barometer reading), all four statements below are true:
$$P(A) \neq P(A|B)$$
$$P(B) \neq P(B|A)$$
$$P(B) \neq P(B|do(A))$$
$$P(A) = P(A|do(B))$$
In this context I would strongly discourage using the word dependent in the context of standard probability calculus and statistics. Not independent is good enough, and in fact more precise in the context of this 'more advanced' mathematics.
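The four statements above can be checked with a small simulation; this is a sketch with made-up numbers, taking $A$ to be the weather (rain with probability 0.3) and $B$ a 90%-accurate barometer reading:

```python
import random

random.seed(1)
N = 200_000

def sample(do_b=None):
    # Structural model: the weather A causes the barometer reading B.
    a = random.random() < 0.3                            # rain with probability 0.3
    b_natural = a if random.random() < 0.9 else (not a)  # noisy reading of a
    b = b_natural if do_b is None else do_b              # do(B): force the dial
    return a, b

obs = [sample() for _ in range(N)]
p_a = sum(a for a, b in obs) / N                                         # P(A)
p_a_given_b = sum(a for a, b in obs if b) / sum(1 for a, b in obs if b)  # P(A|B)
p_a_do_b = sum(a for a, b in [sample(do_b=True) for _ in range(N)]) / N  # P(A|do(B))

print(f"P(A)       ~ {p_a:.3f}")
print(f"P(A|B)     ~ {p_a_given_b:.3f}")  # differs from P(A): seeing B informs about A
print(f"P(A|do(B)) ~ {p_a_do_b:.3f}")     # matches P(A): setting B does not change A
```

Observing $B$ shifts the probability of $A$, while intervening on $B$ leaves it untouched, exactly as the last two statements say.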
17,957 | Is "not independent" the same as "dependent" in English? | "if A is dependent of B then also B is dependent of A"
Grammatically, that is not correct; the correct preposition is "on", not "of".
Mathematically, the term "dependent" is often used in a nonsymmetric sense: if y is being treated as a function of x, then y is dependent on x. In an experimental setup, the variable we directly control is called the "independent" variable, and the one that results from the independent variable is called the "dependent" variable.
If you wanted to emphasize the symmetrical nature, you could say "x and y are dependent on each other".
17,958 | Is "not independent" the same as "dependent" in English? | "Not independent" and "dependent" are grammatically the same, not only in English but in other languages as well, including the language of mathematical logic. However, when discussing statistics, one has to use more accurate language. The key observation is that two events can be independent in only one way, but there are many ways in which they can be dependent (such as having a causal relationship, etc.). There is no point in saying that events or random variables are dependent without describing the structure of the dependency. Stating that two events are just "dependent" is meaningless.
17,959 | How to perform t-test with huge samples? | chl already mentioned the trap of multiple comparisons when conducting 25 simultaneous tests on the same data set. An easy way to handle that is to adjust the significance threshold by dividing it by the number of tests (in this case 25). The more precise (Šidák) formula is: adjusted threshold = 1 - (1 - alpha)^(1/n). However, the two formulas give almost the same adjusted threshold.
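The two adjustments can be compared directly; a quick sketch (alpha = 0.05 and n = 25 are assumed here for illustration):

```python
# Per-test significance thresholds for n = 25 tests at a
# family-wise level of alpha = 0.05 (both values assumed).
alpha, n = 0.05, 25

bonferroni = alpha / n              # simple division by the number of tests
sidak = 1 - (1 - alpha) ** (1 / n)  # the more precise formula

print(f"Bonferroni threshold: {bonferroni:.6f}")  # ~ 0.002000
print(f"Sidak threshold:      {sidak:.6f}")       # ~ 0.002050
```

As the answer says, the two per-test thresholds are almost identical for moderate n.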
There is another major issue with your hypothesis testing exercise. You will almost certainly uncover some really trivial differences that are extremely significant at the 99.9999% level. This is because when you deal with a sample of such a large size (n = 1,313,662), you get a standard error that is very close to 0: the square root of 1,313,662 is about 1,146, so you will divide the standard deviation by 1,146. In short, you will capture minute differences that may be completely immaterial.
I would suggest you move away from this hypothesis testing framework and instead conduct an effect size type analysis. Within this framework the measure of statistical distance is the standard deviation. Unlike the standard error, the standard deviation is not artificially shrunk by the size of the sample, so this approach will give you a better sense of the material differences between your data sets. Effect size analysis is also much more focused on the confidence interval around the mean difference, which is more informative than the hypothesis-testing focus on statistical significance that often is not practically significant at all. Hope that helps.
17,960 | How to perform t-test with huge samples? | Student's t-distribution becomes closer and closer to the standard normal distribution as the degrees of freedom get larger. With 1313662 + 38704 – 2 = 1352364 degrees of freedom, the t-distribution will be indistinguishable from the standard normal distribution, as can be seen in the picture below (unless perhaps you're in the very extreme tails and you're interested in distinguishing absolutely tiny p-values from even tinier ones). So you can use the table for the standard normal distribution instead of the table for the t-distribution.
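This can be checked numerically; a sketch using scipy, with the degrees of freedom from this answer:

```python
from scipy.stats import norm, t

df = 1313662 + 38704 - 2  # 1352364 degrees of freedom

# Compare upper-tail critical values of the t and standard normal distributions
for p in (0.05, 0.01, 0.001):
    tcrit = t.ppf(1 - p, df)
    zcrit = norm.ppf(1 - p)
    print(f"upper {p}: t = {tcrit:.6f}, z = {zcrit:.6f}")
```

At this df the two critical values agree to several decimal places, so the normal table is safe to use.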
17,961 | How to perform t-test with huge samples? | The $t$ distribution tends to the $z$ (gaussian) distribution when $n$ is large (in fact, when $n>30$, they are almost identical, see the picture provided by @onestop). In your case, I would say that $n$ is VERY large, so you can just use a $z$-test. As a consequence of the sample size, even VERY small differences will be declared significant. So, it is worth asking yourself if these tests (with the full data set) are really interesting.
Just to be sure: as your data set includes 25 variables, are you making 25 tests? If so, you probably need to correct for multiple comparisons so as not to inflate the type I error rate (see the related thread on this site).
BTW, the R software would give you the p-values you are looking for, no need to rely on tables:
> x1 <- rnorm(n=38704)
> x2 <- rnorm(n=1313662, mean=.1)
> t.test(x1, x2, var.equal=TRUE)
Two Sample t-test
data: x1 and x2
t = -17.9156, df = 1352364, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.1024183 -0.0822190
sample estimates:
mean of x mean of y
0.007137404 0.099456039
17,962 | How to perform t-test with huge samples? | You can use the following Python function, which I wrote, to run the test and also calculate the effect size. The test itself is straightforward:
import numpy as np
from scipy.stats import t

def Independent_tTest(x1, x2, std1, std2, n1, n2):
    '''Welch's independent t-test between two sample groups
    Note:
        The test hypotheses:
        H0: The two sample means are not significantly different (same population)
        H1: The two sample means are significantly different (different populations)
        - Reject H0 if the p-value is below the chosen significance level
    Args:
        x1(float): mean of the first sample group.
        x2(float): mean of the second sample group.
        std1(float): standard deviation of the first sample group.
        std2(float): standard deviation of the second sample group.
        n1(int): size of the first sample group.
        n2(int): size of the second sample group.
    Return:
        formatted string with the degrees of freedom, t-statistic, p-value and effect size
    '''
    # Welch-Satterthwaite (corrected) degrees of freedom; no equal-variance assumption
    corrected_degree_of_freedom = (((std1**2/n1) + (std2**2/n2))**2)/(((std1**4)/((n1**2)*(n1-1)))+((std2**4)/((n2**2)*(n2-1))))
    # Welch t-statistic: the unpooled standard error of the difference in means
    t_value = (x1 - x2)/np.sqrt((std1**2/n1) + (std2**2/n2))
    # two-sided p-value
    sig = 2 * (1 - t.cdf(abs(t_value), corrected_degree_of_freedom))
    # effect size r = sqrt(t^2 / (t^2 + df))
    effect_size = np.sqrt((t_value**2)/(t_value**2 + corrected_degree_of_freedom))
    return f"corrected degrees of freedom {corrected_degree_of_freedom:0.4f} give a t-value = {t_value:0.4f}, with significance = {sig:0.4f} and effect size = {effect_size:0.4f}"
17,963 | Examples of a statistic that is not independent of sample's distribution? | That definition is a somewhat awkward way to state it. A "statistic" is any function of the observable values. All that definition means is that a statistic is a function only of the observable values, not a function of the distribution or any of its parameters. For example, if $X_1, X_2, ..., X_n \sim \text{N}(\mu, 1)$ then a statistic would be any function $T(X_1,...,X_n)$ whereas a function $H(X_1,...,X_n, \mu)$ would not be a statistic, since it depends on $\mu$. Here are some further examples:
$$\begin{equation} \begin{aligned}
\text{Statistic} & & & & & \bar{X}_n = \frac{1}{n} \sum_{i=1}^n X_i, \\[12pt]
\text{Statistic} & & & & & S_n^2 = \frac{1}{n} \sum_{i=1}^n (X_i - \bar{X}_n)^2, \\[12pt]
\text{Not a statistic} & & & & & D_n = \bar{X}_n - \mu, \\[12pt]
\text{Not a statistic} & & & & & p_i = \text{N}(x_i | \mu, 1), \\[12pt]
\text{Not a statistic} & & & & & Q = 10 \mu. \\[12pt]
\end{aligned} \end{equation}$$
Every statistic is a function only of the observable values, and not of their distribution or its parameters. So there are no examples of a statistic that is a function of the distribution or its parameters (any such function would not be a statistic). However, it is important to note that the distribution of a statistic (as opposed to the statistic itself) will generally depend on the underlying distribution of the values. (This is true for all statistics other than ancillary statistics.)
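As a small illustration (a sketch with made-up data; the variable names are mine), the first two quantities below are statistics because they are computed from the observed values alone, while the third requires the unknown parameter $\mu$ and so is not:

```python
import numpy as np

# Made-up sample from N(mu, 1); in practice mu is unknown.
rng = np.random.default_rng(0)
mu = 5.0
x = rng.normal(loc=mu, scale=1.0, size=100)

# Statistics: functions of the observed values only.
xbar = x.mean()                # the sample mean, X-bar_n above
s2 = ((x - xbar) ** 2).mean()  # the 1/n sample variance, S_n^2 above

# Not a statistic: it needs the unknown parameter mu as an input.
d = xbar - mu

print(xbar, s2, d)
```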
What about a function where the parameters are known? In the comments below, Alecos asks an excellent follow-up question. What about a function that uses a fixed hypothesised value of the parameter? For example, what about the statistic $\sqrt{n} (\bar{x} - \mu)$ where $\mu = \mu_0$ is taken to be equal to a known hypothesised value $\mu_0 \in \mathbb{R}$? Here the function is indeed a statistic, so long as it is defined on the appropriately restricted domain. So the function $H_0: \mathbb{R}^n \rightarrow \mathbb{R}$ with $H_0(x_1,...,x_n) = \sqrt{n} (\bar{x} - \mu_0)$ would be a statistic, but the function $H: \mathbb{R}^{n+1} \rightarrow \mathbb{R}$ with $H(x_1,...,x_n, \mu) = \sqrt{n} (\bar{x} - \mu)$ would not be a statistic.
17,964 | Examples of a statistic that is not independent of sample's distribution? | I interpret that as saying that you should decide before you see the data what statistic you are going to calculate. So, for instance, if you're going to take out outliers, you should decide before you see the data what constitutes an "outlier". If you decide after you see the data, then your function is dependent on the data.
17,965 | James-Stein shrinkage 'in the wild'? | The James-Stein estimator is not widely used itself, but it has inspired soft thresholding and hard thresholding, which are really widely used.
Wavelet shrinkage estimation (see the R package wavethresh) is used a lot in signal processing, and shrunken centroids (package pamr in R) are used for classification of DNA microarray data; there are a lot of examples of the practical efficiency of shrinkage...
For theoretical purposes, see the section of Candès's review about shrinkage estimation (p. 20 covers James-Stein, and the section after that one deals with soft and hard thresholding):
http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.161.8881&rep=rep1&type=pdf
EDIT from the comments: why is JS shrinkage less used than soft/hard thresholding?
James-Stein is more difficult to manipulate (practically and theoretically) and to understand intuitively than hard thresholding, but the "why" question is a good one!
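For reference, the three shrinkage rules mentioned in this thread can be sketched in a few lines (the data and threshold are made up, and the James-Stein version shown is the positive-part variant with known unit noise variance):

```python
import numpy as np

def soft_threshold(x, lam):
    # shrink every coefficient toward 0 by lam, zeroing the small ones
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    # keep large coefficients unchanged, zero out the rest
    return np.where(np.abs(x) > lam, x, 0.0)

def james_stein(x, sigma2=1.0):
    # positive-part James-Stein shrinkage of a mean vector, given one
    # noisy observation per coordinate with known noise variance sigma2
    p = x.size
    factor = max(0.0, 1.0 - (p - 2) * sigma2 / float(np.sum(x ** 2)))
    return factor * x

x = np.array([3.0, -0.5, 0.2, -2.5, 1.5])
print(soft_threshold(x, 1.0))
print(hard_threshold(x, 1.0))
print(james_stein(x))
```

The thresholding rules zero out small coefficients (sparsity), while James-Stein shrinks every coordinate by the same factor, which is part of why the thresholding variants are easier to reason about.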
17,966 | James-Stein shrinkage 'in the wild'? | Ridge regression is a form of shrinkage. See Draper & Van Nostrand (1979).
Shrinkage has also proved useful in estimating seasonal factors for time series. See Miller and Williams (IJF, 2003).
17,967 | James-Stein shrinkage 'in the wild'? | As mentioned by others, James-Stein is not often used directly, but it is really the first paper on shrinkage, which in turn is used pretty much everywhere in single and multiple regression. The link between James-Stein and modern estimation is explained in detail in this paper by E. Candès. Going back to your question, I think James-Stein is an intellectual non-curiosity, in the sense that it was intellectual for sure, but had an incredibly disruptive effect on statistics, and nobody could dismiss it as a curiosity afterwards. Everyone thought that the vector of empirical means was an admissible estimator, and Stein proved them wrong with a counterexample. The rest is history.
17,968 | James-Stein shrinkage 'in the wild'? | See also Jennrich, RJ, Oman, SD "How much does Stein estimation help in multiple linear regression?" Technometrics, 28, 113-121, 1986.
17,969 | James-Stein shrinkage 'in the wild'? | Korbinian Strimmer uses the James-Stein estimator for inferring gene networks. I've used his R packages a few times and they seem to provide very good and quick answers.
17,970 | Can we calculate mean of absolute value of a random variable analytically? | In general knowing these 4 properties is not enough to tell you the expectation of the absolute value of a random variable. As proof, here are two discrete distributions $X$ and $Y$ which have mean 0 and the same variance, skew, and kurtosis, but for which $\mathbb{E}(|X|) \ne \mathbb{E}(|Y|)$.
t P(X=t) P(Y=t)
-3 0.100 0.099
-2 0.100 0.106
-1 0.100 0.085
0 0.400 0.420
1 0.100 0.085
2 0.100 0.106
3 0.100 0.099
You can verify that the 1st, 2nd, 3rd, and 4th central moments of these distributions are the same, and that the expectation of the absolute value is different.
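This verification can be done in a few lines of Python, using the probabilities from the table above:

```python
# Probabilities from the table above.
support = [-3, -2, -1, 0, 1, 2, 3]
pX = [0.100, 0.100, 0.100, 0.400, 0.100, 0.100, 0.100]
pY = [0.099, 0.106, 0.085, 0.420, 0.085, 0.106, 0.099]

def moment(p, k):
    # E[T^k] for a discrete distribution p on the support
    return sum(pr * v ** k for v, pr in zip(support, p))

for k in (1, 2, 3, 4):
    print(k, moment(pX, k), moment(pY, k))  # equal for k = 1..4

e_abs_X = sum(pr * abs(v) for v, pr in zip(support, pX))  # ~ 1.200
e_abs_Y = sum(pr * abs(v) for v, pr in zip(support, pY))  # ~ 1.188
print(e_abs_X, e_abs_Y)
```

The first four moments match, yet the expected absolute values differ by 0.012.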
Edit: explanation of how I found this example.
For ease of calculation I decided that:
$X$ and $Y$ would both be symmetric about $0$, so that the mean and skew would automatically be $0$.
$X$ and $Y$ would both be discrete taking values on $\{-n, .., +n\}$ for some $n$.
For a given distribution $X$, we want to find another distribution $Y$ satisfying the simultaneous equations $\mathbb{E}(Y^2) = \mathbb{E}(X^2)$ and $\mathbb{E}(Y^4) = \mathbb{E}(X^4)$. We find $n = 2$ isn't enough to provide multiple solutions, because subject to the above constraints we only have 2 degrees of freedom: once we pick $f(2)$ and $f(1)$, the rest of the distribution is fixed, and our two simultaneous equations in two variables have a unique solution, so $Y$ must have the same distribution as $X$. But $n = 3$ gives us 3 degrees of freedom, so should lead to infinite solutions.
Given $X$, our 3 degrees of freedom in picking $Y$ are:
$$f_Y(1) = f_X(1)+p \\
f_Y(2) = f_X(2)+q \\
f_Y(3) = f_X(3)+r$$
Then our simultaneous equations become:
$$
\begin{align}
p + 4q + 9r& = 0 \\
p + 16q + 81r& = 0
\end{align}
$$
The general solution is:
$$
p = 15r \\
q = -6r \\
$$
Finally I arbitrarily picked
$$
\begin{align}
f_X(1) & = 0.1 \\
f_X(2) & = 0.1 \\
f_X(3) & = 0.1 \\
r & = -0.001
\end{align}
$$
giving me the above counterexample.
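The counterexample is easy to verify numerically (a Python sketch; the probabilities are copied straight from the table above, nothing else is assumed):

```python
# Verify that X and Y share their first four (central) moments but
# differ in E|X|; probabilities are taken from the table above.
support = [-3, -2, -1, 0, 1, 2, 3]
p_x = [0.100, 0.100, 0.100, 0.400, 0.100, 0.100, 0.100]
p_y = [0.099, 0.106, 0.085, 0.420, 0.085, 0.106, 0.099]

def moment(p, k):
    # k-th raw moment; central and raw moments coincide since the mean is 0
    return sum(prob * t ** k for t, prob in zip(support, p))

def mean_abs(p):
    # expectation of the absolute value
    return sum(prob * abs(t) for t, prob in zip(support, p))

for k in (1, 2, 3, 4):
    print(k, moment(p_x, k), moment(p_y, k))   # equal for every k

print(mean_abs(p_x), mean_abs(p_y))            # about 1.2 vs 1.188
```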
17,971 | Can we calculate mean of absolute value of a random variable analytically? | It depends on what you mean by an "analytical" calculation. In general, this is just
$$ E(|X|) = \int |x| f(x)\,dx, $$
so you do have a formula. But I assume that "evaluating a (possibly improper) integral" is not what you had in mind.
Then again, probably the simplest non-trivial example would be that of the absolute value of a normal distribution, which is the folded normal distribution. And even here, the expression given by Wikipedia for the expectation involves evaluating $\Phi$, which is the CDF of the standard normal - and here again, you need to evaluate an improper integral.
So if you don't let integral evaluations count, the answer is no in general, even for simple cases like the folded normal.
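To illustrate the point numerically (a hedged sketch, not part of the original answer): for the standard normal, the folded-normal mean has the closed form $\sqrt{2/\pi}$, yet even checking that "closed form" means evaluating an integral, here with a crude midpoint rule:

```python
# For X ~ N(0, 1), E|X| = sqrt(2/pi) (the folded-normal mean).
# Approximate E|X| = integral of |x| * phi(x) dx with a midpoint rule.
import math

def phi(x):
    # standard normal density
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

n, lo, hi = 200_000, -10.0, 10.0
h = (hi - lo) / n
approx = sum(abs(lo + (i + 0.5) * h) * phi(lo + (i + 0.5) * h)
             for i in range(n)) * h

closed_form = math.sqrt(2 / math.pi)
print(approx, closed_form)   # both about 0.79788
```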
17,972 | Can we calculate mean of absolute value of a random variable analytically? | $$
\DeclareMathOperator{\Var}{Var}
\DeclareMathOperator{\Skew}{Skew}
\DeclareMathOperator{\Kurt}{Kurt}
\newcommand{\E}{\mathbb{E}}
$$
Intuitive answer:
Shifting the distribution for the random variable $X$ to the left or the right changes the mean $\mu = \E[X]$ ("center") by the exact same amount. However, the variance $\sigma^2$ ("width"), skewness, and kurtosis ("tailedness") do not change because they are calculated based on the distances from the center $\mu$ of the distribution. Thus, they cannot possibly be used to express $\mu$. For similar reasons, they cannot be used to express $\E[|X|]$, which for a positive-valued random variable $X$ equals $\E[X]$ and so behaves in exactly the same way under right-shifts.
Rigorous answer:
To simplify the problem, let's consider why you can't express $\E[X]$ as a function of $\Var[X]$, $\Skew[X]$, and $\Kurt[X]$.
By definition,
\begin{align}
\mu &= \E[X] \\[1em]
\sigma^2 = \Var[X] &= \E[(X - \mu)^2] \\[1em]
\Skew[X] &= \E\left[\left(\frac{X - \mu}{\sigma}\right)^3\right] \\[1em]
\Kurt[X] &= \E\left[\left(\frac{X - \mu}{\sigma}\right)^4\right].
\end{align}
Notice that if you add a constant shift $\gamma$ to $X$,
$$X' = X + \gamma,$$
then the $\mu'$ associated with $X'$ also shifts by the same amount:
$$\mu' = \E[X'] = \E[X + \gamma] = \E[X] + \E[\gamma] = \E[X] + \gamma = \mu + \gamma.$$
However, variance, skewness, and kurtosis don't change at all:
\begin{align}
\Var[X']
&= \E[(X + \gamma - \mu')^2]
&&= \E[(X - \mu)^2]
&&= \Var[X]
\\[1em]
\Skew[X']
&= \E\left[\left(\frac{X + \gamma - \mu'}{\sigma'}\right)^3\right]
&&= \E\left[\left(\frac{X - \mu}{\sigma}\right)^3\right]
&&= \Skew[X]
\\[1em]
\Kurt[X']
&= \E\left[\left(\frac{X + \gamma - \mu'}{\sigma'}\right)^4\right]
&&= \E\left[\left(\frac{X - \mu}{\sigma}\right)^4\right]
&&= \Kurt[X].
\end{align}
Thus, for any value of $\gamma$, these quantities are invariant. Though the value of $\E[X]$ may change, these values clearly do not! Any function of these variables is thus constant under $\gamma$-shift, and so it cannot possibly express $\E[X]$.
The same proof holds for $\E[|X|]$.
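The shift-invariance argument is easy to see on data (an assumed illustration with arbitrary simulated values, not from the answer):

```python
# Shifting a sample by gamma moves the mean by exactly gamma but leaves
# variance, skewness, and kurtosis unchanged.
import random
import statistics as st

def skew(xs):
    m, s = st.fmean(xs), st.pstdev(xs)
    return st.fmean([((x - m) / s) ** 3 for x in xs])

def kurt(xs):
    m, s = st.fmean(xs), st.pstdev(xs)
    return st.fmean([((x - m) / s) ** 4 for x in xs])

random.seed(0)
xs = [random.gauss(0, 1) ** 2 for _ in range(1000)]   # deliberately skewed
shifted = [x + 5.0 for x in xs]                        # gamma = 5

print(st.fmean(shifted) - st.fmean(xs))          # mean moves by gamma
print(st.pvariance(xs), st.pvariance(shifted))   # unchanged
print(skew(xs), skew(shifted))                   # unchanged
print(kurt(xs), kurt(shifted))                   # unchanged
```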
17,973 | What is "one-hot" encoding called in scientific literature? | Statisticians refer to one-hot encoding as dummy coding. As others have suggested (including Scortchi in the comments), this is not an exact synonym, but it is the term that would usually be used for 0-1 encoded categorical variables.
See also: "Dummy variable" versus "indicator variable" for nominal/categorical data
17,974 | What is "one-hot" encoding called in scientific literature? | It depends on your target audience.
As Tim said, statisticians call it dummy coding, and that's what I would expect to see when describing something like a regression model. "Dummy coded variables were included to adjust for the store's location." I think calling it a one-hot encoding would seem slightly strange here.
However, as another Tim also said, one-hot encoding is fairly common in the machine learning literature. It faintly implies the existence of nodes (as in a neural network), physical wires (in a device), or something like that, at least to me.
Formally, I guess you are applying a set of indicator functions $\mathbb{I}_X$, but that's probably way too formal outside of a proof.
17,975 | What is "one-hot" encoding called in scientific literature? | The term comes from electronics engineering. Just think: who would call 1 "hot"? Only those who work with electricity, where "hot" or "live" means there's electrical potential on the wire. "One hot" refers to the circuit design where a discrete electrical signal level on one wire is decoded into hot/cold on a set of wires. I suppose some machine learning folks with an EE background found the analogy compelling.
In econometrics and statistics you may encounter dummy or indicator variables, which are quite similar because these are used to represent distinct categories with their distinct indicators. There's a subtle difference though. For instance, you make K-1 dummies for K categories, because the base category corresponds to all dummies set to 0. In contrast, I think that in one hot encoding you have K wires, where the base category will have its own wire (variable).
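The K-wires-versus-K-1-dummies distinction can be sketched as follows (hypothetical helper functions for illustration only, not from the answer):

```python
# Contrast one-hot encoding (K columns, one per category) with dummy
# coding (K-1 columns, base category = all zeros).
def one_hot(value, levels):
    return [1 if value == level else 0 for level in levels]

def dummy(value, levels):
    # the first level is the base category and gets no column
    return [1 if value == level else 0 for level in levels[1:]]

levels = ["red", "green", "blue"]     # K = 3 categories
print(one_hot("green", levels))       # [0, 1, 0] -- three "wires"
print(dummy("green", levels))         # [1, 0]    -- two dummies
print(dummy("red", levels))           # [0, 0]    -- base category
```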
17,976 | What is "one-hot" encoding called in scientific literature? | I'm statistically trained, and have recently heard of "one-hot encoding" in machine learning/comp sci lit. I've usually just referred to the one-hotted matrix as a design matrix/data matrix/design frame.
17,977 | What is "one-hot" encoding called in scientific literature? | In physical sciences and engineering, it's called the (generalized) Kronecker delta.
In its simplest form, the Kronecker delta's defined as $$
\begin{align*}
{\delta}_{i,j} {\equiv}
\begin{cases}
1 &\text{if} & i=j \\
0 &\text{else}
\end{cases}
\end{align*},
$$though this is trivially generalized to$$
\begin{align*}
{\delta}_{\left[\text{condition}\right]} {\equiv}
\begin{cases}
1 &\text{if} & \left[\text{condition}\right] \\
0 &\text{else}
\end{cases}
\end{align*}.
$$
So, "${\delta}_{i{\in}\text{category}}$" will tend to be read as$$
\begin{align*}
{\delta}_{i{\in}\text{category}} {\equiv}
\begin{cases}
1 &\text{if} & i{\in}\text{category} \\
0 &\text{else}
\end{cases}
\end{align*},
$$
which most authors would tend to truncate to "${\delta}_{i}$", if the category is obvious from context.
The Kronecker delta is really useful in Sigma/Pi/Einstein/etc. notations since it allows for terms to be specified conditionally.
Just to relate this to common programming structures, the Kronecker delta's condition ? 1 : 0, where ?: is the conditional operator.
As a tangential note, I'd encourage authors to abandon the old-fashioned ${\delta}_{i,j}$ in favor of the generalized equivalent, ${\delta}_{i=j}$. There's no advantage to the old-fashioned notation, while the generalized notation's more explicit and extensible.
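The generalized Kronecker delta maps directly onto code (a minimal sketch; the one-hot use at the end is my own illustration, not from the answer):

```python
# The generalized Kronecker delta as code: condition ? 1 : 0.
def delta(condition):
    # 1 if the condition holds, 0 otherwise
    return 1 if condition else 0

print(delta(3 == 3), delta(3 == 4))   # 1 0  (the classic delta_{i,j})

# Using it to build a one-hot vector for category i out of K:
K, i = 6, 2                           # zero-based index
x = [delta(k == i) for k in range(K)]
print(x)                              # [0, 0, 1, 0, 0, 0]
```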
17,978 | What is "one-hot" encoding called in scientific literature? | Pattern Recognition and Machine Learning by Christopher Bishop uses the term $1$-of-$K$ scheme.
Here is a quote from the book,
Binary variables can be used to describe quantities that can take one of two possible values. Often, however, we encounter discrete variables that can take on one of $K$ possible mutually exclusive states. Although there are various alternative ways to express such variables, we shall see shortly that a particularly convenient representation is the $1$-of-$K$ scheme in which the variable is represented by a $K$-dimensional vector $\textbf{x}$ in which one of the elements $x_k$ equals $1$, and all remaining elements equal $0$. So, for instance if we have a variable that can take $K = 6$ states and a particular observation of the variable happens to correspond to the state where $x_3 = 1$, then $\textbf{x}$
will be represented by,
$\textbf{x} = (0, 0, 1, 0, 0, 0)^{T}$
17,979 | Determine the off - diagonal elements of covariance matrix, given the diagonal elements | You might find it instructive to start with a basic idea: the variance of any random variable cannot be negative. (This is clear, since the variance is the expectation of the square of something and squares cannot be negative.)
Any $2\times 2$ covariance matrix $\mathbb A$ explicitly presents the variances and covariances of a pair of random variables $(X,Y),$ but it also tells you how to find the variance of any linear combination of those variables. This is because whenever $a$ and $b$ are numbers,
$$\operatorname{Var}(aX+bY) = a^2\operatorname{Var}(X) + b^2\operatorname{Var}(Y) + 2ab\operatorname{Cov}(X,Y) = \pmatrix{a&b}\mathbb A\pmatrix{a\\b}.$$
Applying this to your problem we may compute
$$\begin{aligned}
0 \le \operatorname{Var}(aX+bY) &= \pmatrix{a&b}\pmatrix{121&c\\c&81}\pmatrix{a\\b}\\
&= 121 a^2 + 81 b^2 + 2c\, ab\\
&=(11a)^2+(9b)^2+\frac{2c}{(11)(9)}(11a)(9b)\\
&= \alpha^2 + \beta^2 + \frac{2c}{(11)(9)} \alpha\beta.
\end{aligned}$$
The last few steps in which $\alpha=11a$ and $\beta=9b$ were introduced weren't necessary, but they help to simplify the algebra. In particular, what we need to do next (in order to find bounds for $c$) is complete the square: this is the process emulating the derivation of the quadratic formula to which everyone is introduced in grade school. Writing
$$C = \frac{c}{(11)(9)},\tag{*}$$
we find
$$\alpha^2 + \beta^2 + \frac{2c}{(11)(9)} \alpha\beta = \alpha^2 + 2C\alpha\beta + \beta^2 = (\alpha+C\beta)^2+(1-C^2)\beta^2.$$
Because $(\alpha+C\beta)^2$ and $\beta^2$ are both squares, they are not negative. Therefore if $1-C^2$ also is non-negative, the entire right side is not negative and can be a valid variance. Conversely, if $1-C^2$ is negative, you could set $\alpha=-C\beta$ to obtain the value $(1-C^2)\beta^2\lt 0$ on the right hand side, which is invalid.
You therefore deduce (from these perfectly elementary algebraic considerations) that
If $A$ is a valid covariance matrix, then $1-C^2$ cannot be negative.
Equivalently, $|C|\le 1,$ which by $(*)$ means $-(11)(9) \le c \le (11)(9).$
There remains the question whether any such $c$ does correspond to an actual variance matrix. One way to show this is true is to find a random variable $(X,Y)$ with $\mathbb A$ as its covariance matrix. Here is one way (out of many).
I take it as given that you can construct independent random variables $A$ and $B$ having unit variances: that is, $\operatorname{Var}(A)=\operatorname{Var}(B) = 1.$ (For example, let $(A,B)$ take on the four values $(\pm 1, \pm 1)$ with equal probabilities of $1/4$ each.)
The independence implies $\operatorname{Cov}(A,B)=0.$ Given a number $c$ in the range $-(11)(9)$ to $(11)(9),$ define random variables
$$X = \sqrt{11^2-c^2/9^2}A + (c/9)B,\quad Y = 9B$$
(which is possible because $11^2 - c^2/9^2\ge 0$) and compute that the covariance matrix of $(X,Y)$ is precisely $\mathbb A.$
Finally, if you carry out the same analysis for any symmetric matrix $$\mathbb A = \pmatrix{a & b \\ b & d},$$ you will conclude three things:
$a \ge 0.$
$d \ge 0.$
$ad - b^2 \ge 0.$
These conditions characterize symmetric, positive semi-definite matrices. Any $2\times 2$ matrix satisfying these conditions indeed is a variance matrix. (Emulate the preceding construction.)
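The construction can be verified by exact enumeration of the four equally likely outcomes of $(A,B)$ (a Python sketch under the same assumptions as the answer):

```python
# Check the construction: (A, B) takes the four values (+-1, +-1) with
# probability 1/4 each, and for any c in [-99, 99] the pair (X, Y)
# should have covariance matrix [[121, c], [c, 81]].
import math

def covariance_matrix(c):
    outcomes = [(a, b) for a in (-1, 1) for b in (-1, 1)]  # prob 1/4 each
    coef = math.sqrt(11 ** 2 - c ** 2 / 9 ** 2)
    xs = [coef * a + (c / 9) * b for a, b in outcomes]
    ys = [9 * b for _, b in outcomes]
    n = len(outcomes)
    mx, my = sum(xs) / n, sum(ys) / n
    vxx = sum((x - mx) ** 2 for x in xs) / n
    vxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vyy = sum((y - my) ** 2 for y in ys) / n
    return vxx, vxy, vyy

print(covariance_matrix(50))   # about (121, 50, 81)
```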
17,980 | Determine the off - diagonal elements of covariance matrix, given the diagonal elements | An intuitive method to determine this answer quickly is to just remember that covariance matrices may be interpreted in the form
\begin{equation}
A = \begin{pmatrix}
\sigma_1^2 & \rho_{12}\sigma_1\sigma_2 &\rho_{13}\sigma_1\sigma_3 & \cdots & \rho_{1n}\sigma_1 \sigma_n \\
& \sigma_2^2 & \rho_{23}\sigma_2\sigma_3 & \cdots & \rho_{2n}\sigma_2 \sigma_n \\
& & \sigma_3^2 & \cdots & \rho_{3n}\sigma_3 \sigma_n \\
& & & \ddots & \vdots \\
& & & & \sigma_n^2
\end{pmatrix}
\end{equation}
where $\rho_{ab} \in [-1,1]$ is a Pearson Correlation Coefficient. In your case you have
\begin{align}
\sigma_1^2 = 121 ,~~~ \sigma_2^2 = 81 ~\Longrightarrow ~ |c| \leq \sqrt{121\cdot 81} = 99
\end{align}
i.e. $c \in [-99, 99]$.
\begin{equation}
A = \begin{pmatrix}
\sigma_1^2 & \rho_{12}\sigma_1\sig | Determine the off - diagonal elements of covariance matrix, given the diagonal elements
An intuitive method to determine this answer quickly is to just remember that covariance matrices may be interpreted in the form
\begin{equation}
A = \begin{pmatrix}
\sigma_1^2 & \rho_{12}\sigma_1\sigma_2 &\rho_{13}\sigma_1\sigma_3 & \cdots & \rho_{1n}\sigma_1 \sigma_n \\
& \sigma_2^2 & \rho_{23}\sigma_2\sigma_3 & \cdots & \rho_{2n}\sigma_2 \sigma_n \\
& & \sigma_3^2 & \cdots & \rho_{3n}\sigma_3 \sigma_n \\
& & & \ddots & \vdots \\
& & & & \sigma_n^2
\end{pmatrix}
\end{equation}
where $\rho_{ab} \in [-1,1]$ is a Pearson Correlation Coefficient. In your case you have
\begin{align}
\sigma_1^2 = 121 ,~~~ \sigma_2^2 = 81 ~\Longrightarrow ~ |c| \leq \sqrt{121\cdot 81} = 99
\end{align}
i.e. $c \in [-99, 99]$. | Determine the off - diagonal elements of covariance matrix, given the diagonal elements
An intuitive method to determine this answer quickly is to just remember that covariance matrices may be interpreted in the form
\begin{equation}
A = \begin{pmatrix}
\sigma_1^2 & \rho_{12}\sigma_1\sig |
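Both this bound and the positive semi-definiteness conditions in the neighboring answers are easy to verify numerically. A minimal sketch in Python, assuming NumPy is available (the helper name is invented for illustration), using the question's diagonal values 121 and 81:

```python
import numpy as np

def is_valid_covariance(c, v1=121.0, v2=81.0):
    """True if [[v1, c], [c, v2]] is symmetric positive semi-definite,
    i.e. a valid covariance matrix."""
    A = np.array([[v1, c], [c, v2]])
    # eigvalsh returns the eigenvalues of a symmetric matrix; PSD means all >= 0
    return bool(np.all(np.linalg.eigvalsh(A) >= -1e-9))

print(is_valid_covariance(99.0))    # True: boundary case, det(A) = 0
print(is_valid_covariance(-99.0))   # True
print(is_valid_covariance(100.0))   # False: |c| > 99 gives a negative eigenvalue
```

The small `-1e-9` tolerance guards against floating-point round-off at the boundary $|c| = 99$.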
17,981 | Determine the off - diagonal elements of covariance matrix, given the diagonal elements | $A$ is positive semi-definite, so by Sylvester's criterion $\det(A) = 121 \cdot 81 - c^2 \geq 0$. Thus, any $c \in [-99, 99]$ will produce a valid covariance matrix.
17,982 | Determine the off - diagonal elements of covariance matrix, given the diagonal elements | There are three main possibilities of note. One is that the variables are uncorrelated, in which case the off-diagonal entries are easy to calculate as 0. Another possibility is that you don't really have two different variables: $y$ is simply a scalar multiple of $x$ (i.e. perfect correlation). If $y = c x$, then $\sigma_{xy} = \sigma_{x}\sigma_{y} = 99$. We get a third possibility in noting that the above assumes $c>0$. For $c<0$, we get $\sigma_{xy} = -99$.
Geometrically, the covariance between two vectors is the product of their lengths times the cosine of the angle between them. Since the cosine varies from $-1$ to $1$, the covariance ranges from the product of their lengths to the negative of the product.
Another approach is to consider $z_1 = \frac{x-\mu_{x}}{\sigma_{x}}$ and $z_2 = \frac{y-\mu_y}{\sigma_{y}}$. $\sigma_{xy} = \sigma_{(\sigma_x z_1)(\sigma_y z_2)}=\sigma_x \sigma_y \sigma_{z_1z_2}=99\sigma_{z_1z_2}$ and $\sigma_{z_1z_2}$ is simply the correlation between $x$ and $y$, which ranges from $-1$ to $1$.
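The two perfect-correlation extremes described in this answer can be checked by simulation. A sketch assuming NumPy, using the question's variances of 121 and 81 (so standard deviations 11 and 9):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 11.0, size=200_000)   # sd 11, so Var(x) is close to 121

for slope in (9.0 / 11.0, -9.0 / 11.0):   # y = c*x has sd 9 and correlation +/-1
    y = slope * x
    print(np.cov(x, y)[0, 1])             # close to +99 and -99, the extremes
```

With an intermediate correlation, the sample covariance lands strictly between those extremes, matching $\sigma_{xy} = 99\,\sigma_{z_1 z_2}$.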
17,983 | What is the name of this plot using vertical lines to show a distribution? | The first example I have seen them referenced in is Strips displaying empirical distributions: I. textured dot strips (Tukey and Tukey, 1990), although I have never been able to actually get that technical report.
Tim is right: they are often accompanied as the rug on an additional plot to show the location of individual observations, but rug plot is a bit more general and that type of plot is not always on the rug of another plot as your question shows!
Here is an example of using points on the rug instead of lines.
Here is an example of the rug being points and not displaying all of the data, but only data missing in the other dimension of a scatterplot.
So a rug plot is not always a set of lines on the borders of another graph, and that type of plot in your question is not always on the margins of another plot. Here is an example of the lines superimposed on a kernel density instead of on the rug of the plot, called a beanplot. The larger lines I believe are used to visualize different quantiles (a.k.a. letter values) of the distribution.
In Wilkinson's Grammar of Graphics it may be considered a one-dimensional scatterplot but using line segments instead of the typical default of circles. The point of this is to prevent many of the nearby points from being superimposed. If you have many points and draw them semi-transparently they eventually turn into a density strip, see the final picture in this post.
I've even seen them suggested to use as sparklines (Greenhill et al., 2011) in that example to visualize binary data. Greenhill calls them in that example separation plots, and here is an example taken from the referenced paper (p.995):
So in that example there are values along the entire axis, and color is used to visualize a binary variable. The black line in that plot is the cumulative proportion of red observations. | What is the name of this plot using vertical lines to show a distribution? | The first example I have seen them referenced in are Strips displaying empirical distributions: I. textured dot strips (Tukey and Tukey, 1990) although I have never been able to actually get that tech | What is the name of this plot using vertical lines to show a distribution?
The first example I have seen them referenced in are Strips displaying empirical distributions: I. textured dot strips (Tukey and Tukey, 1990) although I have never been able to actually get that technical report.
Tim is right: they are often accompanied as the rug on an additional plot to show the location of individual observations, but rug plot is a bit more general and that type of plot is not always on the rug of another plot as your question shows!
Here is an example of using points on the rug instead of lines.
Here is an example of the rug being points and not displaying all of the data, but only data missing in the other dimension of a scatterplot.
So a rug plot is not always a set of lines on the borders of another graph, and that type of plot in your question is not always on the margins of another plot. Here is an example of the lines superimposed on a kernel density instead of on the rug of the plot, called a beanplot. The larger lines I believe are used to visualize different quantiles (a.k.a. letter values) of the distribution.
(source: biomedcentral.com)
In Wilkinson's Grammar of Graphics it may be considered a one-dimensional scatterplot but using line segments instead of the typical default of circles. The point of this is to prevent many of the nearby points from being superimposed. If you have many points and draw them semi-transparently they eventually turn into a density strip, see the final picture in this post.
I've even seen them suggested to use as sparklines (Greenhill et al., 2011) in that example to visualize binary data. Greenhill calls them in that example separation plots, and here is an example taken from the referenced paper (p.995):
So in that example there are values along the entire axis, and color is used to visualize a binary variable. The black line in that plot is the cumulative proportion of red observations. | What is the name of this plot using vertical lines to show a distribution?
The first example I have seen them referenced in are Strips displaying empirical distributions: I. textured dot strips (Tukey and Tukey, 1990) although I have never been able to actually get that tech |
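As a toy illustration of the idea behind these displays, one tick mark per observation along a single axis, here is a hypothetical pure-Python text rendering; the function name and layout are invented for this sketch:

```python
def ascii_rug(values, lo=0.0, hi=1.0, width=61):
    """Render a 1-D strip: a '|' wherever at least one observation falls.
    Nearby observations merge into denser runs, as in the plots above."""
    row = [" "] * width
    for v in values:
        if lo <= v <= hi:
            # map v linearly onto a character position in [0, width-1]
            row[int((v - lo) / (hi - lo) * (width - 1))] = "|"
    return "".join(row)

print(ascii_rug([0.05, 0.07, 0.30, 0.31, 0.32, 0.90]))
```

Real rug/strip displays work the same way, only drawn with short line segments instead of characters, which is why dense clusters of points read as darker bands.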
17,984 | What is the name of this plot using vertical lines to show a distribution? | It is called a rug plot (see e.g. here or here). In R it can be made with a rug function.
The plot also seems to appear under another name, strip chart; it is referred to like this by Phillip I. Good in Introduction to Statistics through Resampling Methods and R/S-Plus (2005, Wiley). In R it is produced by the stripchart function.
It seems that the tiny version that often accompanies a larger plot is called a rug plot, while the standalone plot made of points or vertical lines is named a strip chart.
17,985 | What is the name of this plot using vertical lines to show a distribution? | In commercial tagging of goods, Barcode or
if they are lines of frequency plotted on time, Spectrum.
EDIT1
When strengths in electromagnetic clouds or gas chromatographs are plotted linearly on a frequency scale, then we can also say Spectrum.
17,986 | What is the name of this plot using vertical lines to show a distribution? | I had the same problem: what is the name of a "bar code"-like visualization for true/false data?
My goal is to represent a true/false array corresponding to an array of words, each word in a fixed place in the array, like the representation of a light spectrum used to identify the absorption of a specific light wave. In the same way, I want to emphasize the missing words and the present words.
I then found the Strip Plot on Vega:
https://vega.github.io/vega-lite/examples/tick_strip.html
I think that, for my goal, this is a better name for my visualization idea.
17,987 | Assumptions to derive OLS estimator | You can always compute the OLS estimator, apart from the case when you have perfect multicollinearity. In that case you have perfect linear dependence among the columns of your X matrix. Consequently, the full rank assumption is not fulfilled and you cannot compute the OLS estimator, because of invertibility issues.
Technically, you do not need the other OLS assumptions to compute the OLS estimator. However, according to the Gauss–Markov theorem you need to fulfill the OLS (CLRM) assumptions in order for your estimator to be BLUE.
You can find an extensive discussion of the Gauss–Markov theorem and its mathematical derivation here:
http://economictheoryblog.com/2015/02/26/markov_theorem/
Furthermore, if you are looking for an overview of the OLS assumptions, i.e. how many there are, what they require, and what happens if you violate a single OLS assumption, you may find an elaborate discussion here:
http://economictheoryblog.com/2015/04/01/ols_assumptions/
I hope that helps, cheers!
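The invertibility failure described above is easy to see numerically: an exactly collinear column makes $X$ rank deficient, so $X'X$ is singular. A small sketch assuming NumPy, with made-up data:

```python
import numpy as np

rng = np.random.default_rng(1)
x1 = rng.normal(size=50)
x2 = 2.0 * x1                                # exact linear dependence on x1
X = np.column_stack([np.ones(50), x1, x2])   # 3 columns, only 2 independent

print(np.linalg.matrix_rank(X))              # 2, not 3: full rank fails,
                                             # so X'X is singular and the OLS
                                             # formula (X'X)^{-1} X'y breaks
```

Dropping either collinear column (or perturbing one of them) restores full rank, and the estimator can be computed again.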
17,988 | Assumptions to derive OLS estimator | The following is based on simple cross sections; for time series and panels it is somewhat different.
In the population, and therefore in the sample, the model can be written as:
\begin{align} \newcommand{\Var}{\rm Var} \newcommand{\Cov}{\rm Cov}
Y &= \beta_0 + \beta_1 x_1 + … + \beta_k x_k + u \\
&= X\beta + u
\end{align}
This is the linearity assumption, which is sometimes misunderstood. The model should be linear in the parameters - namely the $\beta_k$. You are free to do whatever you want with the $x_i$ themselves. Logs, squares etc. If this is not the case, then the model cannot be estimated by OLS - you need some other nonlinear estimator.
A random sample (for cross sections)
This is needed for inference, and sample properties. It is somewhat irrelevant for the pure mechanics of OLS.
No perfect Collinearity
This means that there can be no perfect relationship between the $x_i$. This is the assumption that ensures that $(X’X)$ is nonsingular, such that $(X’X)^{-1}$ exists.
Zero conditional mean: $E(u|X) = 0$.
This means that you have properly specified the model such that: there are no omitted variables, and the functional form you estimated is correct relative to the (unknown) population model. This is always the problematic assumption with OLS, since there is no way to ever know if it is actually valid or not.
The variance of the error term is constant, conditional on all the $X_i$: $\Var(u|X)=\sigma^2$
Again this means nothing for the mechanics of OLS, but it ensures that the usual standard errors are valid.
Normality; the error term $u$ is independent of the $X_i$, and follows $u \sim N(0,\sigma^2)$.
Again this is irrelevant for the mechanics of OLS, but ensures that the sampling distribution of the $\beta_k$ is normal, $\hat{\beta_k} \sim N(\beta_k , \Var(\hat{\beta_k}))$.
Now for the implications.
Under 1 - 6 (the classical linear model assumptions) OLS is BLUE (best linear unbiased estimator), best in the sense of lowest variance. It is also efficient amongst all linear estimators, as well as all estimators that use some function of the $x$. More importantly, under 1 - 6 OLS is also the minimum variance unbiased estimator. That means that amongst all unbiased estimators (not just the linear ones) OLS has the smallest variance. OLS is also consistent.
Under 1 - 5 (the Gauss-Markov assumptions) OLS is BLUE and efficient (as described above).
Under 1 - 4, OLS is unbiased, and consistent.
Actually OLS is also consistent under a weaker assumption than $(4)$, namely that: $(1)\ E(u) = 0$ and $(2)\ \Cov(x_j , u) = 0$. The difference from assumption 4 is that, under this assumption, you do not need to nail the functional relationship perfectly.
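Under assumptions 1 - 4 the OLS estimator $(X'X)^{-1}X'y$ is unbiased and consistent. A quick simulation sketch (assuming NumPy; the population coefficients are made up) shows it recovering the parameters when the assumptions hold by construction:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5_000
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
beta = np.array([2.0, -1.0, 0.5])            # made-up population parameters
y = X @ beta + rng.normal(size=n)            # u ~ N(0,1), independent of X

beta_hat = np.linalg.solve(X.T @ X, X.T @ y) # the OLS estimator (X'X)^{-1} X'y
print(np.round(beta_hat, 1))                 # close to [2.0, -1.0, 0.5]
```

Rerunning with different seeds makes the estimates scatter around the true $\beta$, which is the unbiasedness claim in miniature.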
17,989 | Assumptions to derive OLS estimator | A comment in another question raised doubts about the importance of the condition $E(\mathbf u \mid \mathbf X) =0$, arguing that it can be corrected by the inclusion of a constant term in the regression specification, and so "it can be easily ignored".
This is not so. The inclusion of a constant term in the regression will absorb the possibly non-zero conditional mean of the error term if we assume that this conditional mean is already a constant and not a function of the regressors. This is the crucial assumption that must be made independently of whether we include a constant term or not:
$$E(\mathbf u \mid \mathbf X) =const.$$
If this holds, then the non-zero mean becomes a nuisance which we can simply solve by including a constant term.
But if this doesn't hold, (i.e. if the conditional mean is not a zero or a non-zero constant), the inclusion of the constant term does not solve the problem: what it will "absorb" in this case is a magnitude that depends on the specific sample and realizations of the regressors. In reality the unknown coefficient attached to the series of ones, is not really a constant but variable, depending on the regressors through the non-constant conditional mean of the error term.
What does this imply?
To simplify, assume the simplest case, where $E(u_i \mid \mathbf X_{-i})=0$ ($i$ indexes the observations) but that $E(u_i \mid \mathbf x_{i})=h(\mathbf x_i)$. I.e. that the error term is mean-independent from the regressors except from its contemporaneous ones (in $\mathbf X$ we do not include a series of ones).
Assume that we specify the regression with the inclusion of a constant term (a regressor of a series of ones).
$$\mathbf y = \mathbf a + \mathbf X\mathbf β + \mathbf ε $$
and compacting notation
$$\mathbf y = \mathbf Z\mathbf γ + \mathbf ε $$
where $\mathbf a = (a,a,a...)'$, $\mathbf Z = [\mathbf 1: \mathbf X]$, $\mathbf γ = (a, \mathbf β)'$, $\mathbf ε = \mathbf u - \mathbf a$.
Then the OLS estimator will be
$$\hat {\mathbf γ} = \mathbf γ + \left(\mathbf Z'\mathbf Z\right)^{-1}\mathbf Z'\mathbf ε$$
For unbiasedness we need $E\left[\mathbf ε\mid \mathbf Z\right] =0$. But
$$E\left[ ε_i\mid \mathbf x_i\right] = E\left[u_i-a\mid \mathbf x_i\right] = h(\mathbf x_i)-a$$
which cannot be zero for all $i$, since we examine the case where $h(\mathbf x_i)$ is not a constant function. So
$$E\left[\mathbf ε\mid \mathbf Z\right] \neq 0 \implies E(\hat {\mathbf γ}) \neq \mathbf γ$$
and
If $E(u_i \mid \mathbf x_{i})=h(\mathbf x_i)\neq h(\mathbf x_j)=E(u_j \mid \mathbf x_{j})$, then even if we include a constant term in the regression, the OLS estimator will not be unbiased, meaning also that the Gauss-Markov result on efficiency, is lost.
Moreover, the error term $\mathbf ε$ has a different mean for each $i$, and so also a different variance (i.e. it is conditionally heteroskedastic). So its distribution conditional on the regressors differs across the observations $i$.
But this means that even if the error term $u_i$ is assumed normal, then the distribution of the sampling error $\hat {\mathbf γ} - \mathbf γ$ will be normal but not zero-mean normal, and with unknown bias. And the variance will differ.
So
If $E(u_i \mid \mathbf x_{i})=h(\mathbf x_i)\neq h(\mathbf x_j)=E(u_j \mid \mathbf x_{j})$, then even if we include a constant term in the regression, Hypothesis testing is no longer valid.
In other words, "finite-sample" properties are all gone.
We are left only with the option to resort to asymptotically valid inference, for which we will have to make additional assumptions.
So simply put, Strict Exogeneity cannot be "easily ignored".
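The failure can be illustrated by simulation: when $E(u_i \mid \mathbf x_i) = h(\mathbf x_i)$ is genuinely non-constant, including a constant term does not rescue OLS. A sketch assuming NumPy, with a made-up data-generating process chosen so that $h$ is correlated with $x$:

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000
x = rng.exponential(size=n)                  # E(x) = 1, E(x^2) = 2, E(x^3) = 6
# Non-constant conditional mean: h(x) = 0.5*x^2 - 1, so E(u) is still ~0
u = 0.5 * x**2 - 1.0 + rng.normal(size=n)
y = 1.0 + 2.0 * x + u                        # true slope is 2

X = np.column_stack([np.ones(n), x])         # constant term included
a_hat, b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(b_hat, 2))                       # near 4, not 2: the constant
                                             # cannot absorb h(x)
```

The probability limit of the slope here is $2 + \tfrac{1}{2}\,\Cov(x, x^2)/\Var(x) = 4$, so the bias does not shrink with $n$; only the variance of the estimate does.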
17,990 | Why don’t we calculate the average of an entire given population instead of computing confidence interval to estimate the population mean? | We’d love to calculate population parameters!
All of inferential statistics is about inferring. In other words, we are using our data at hand to guess about something greater than the data (e.g., the population from which the data are drawn). We can be silly with our guesses, or we can be thoughtful. Good statisticians intend to be thoughtful in order to make good guesses.
Those guesses are the inferences.
If we had the whole population, we wouldn’t have to guess, so inference would not be useful. We would just calculate the population parameters, and that’s the end. Alas, we tend to be interested in something greater than our data, so inferences are necessary.
The specific example you give, of doing a z-test with a known variance and an unknown mean, is a special case. With real data, we never know the true variance. However, such a test is useful as a first example of how to do hypothesis testing, and it serves a useful educational purpose in a "Stat 101"-type of class.
17,991 | Why don’t we calculate the average of an entire given population instead of computing confidence interval to estimate the population mean? | If we're able to observe the entire population of interest then that's exactly what we'd do! In this case we don't require any statistical inference because we directly observe the entire population of interest. Where statistical inference (including confidence intervals, etc.) comes in is when, for some reason, we are unable to observe the entire population. Often this is because it is too expensive or inconvenient to sample the entire population, but in some cases it might be completely infeasible.
17,992 | Why don’t we calculate the average of an entire given population instead of computing confidence interval to estimate the population mean? | Observed populations are realisations of data-generating processes and we want to compare processes rather than historical fact
In some cases we may have captive populations and really good data capture - for instance, we may know the exact hospital stay duration of everyone in several university hospitals who has been admitted for a specific condition over 2021-2022. There may be sampling discussions around the consistency of the definitions (e.g. what counts as a "condition" or an "admission") and whether there are different distributions over time (one hospital may do more emergency admissions at weekends, one might have had fewer admissions during a more strained COVID-related period), but let's set those aside for now.
We can say what the average stay duration for each hospital was exactly, but if we want to say "people in hospital A were hospitalised for longer and not just by random chance" we actually want to compare the data generating processes. We might start by modelling the process as 'People in hospital X get a random duration distributed N(mu_1, sigma) for hospital A and N(mu_2, sigma) for hospital B', then start adding more complexity to account for other effects such as the level of stress on the hospital, in-week periodicity, different levels of variation, etc. etc.
If you're not interested in healthcare, let's say I rolled a die and got the following results:
table(floor(runif(10000, 1, 7)))
1 2 3 4 5 6
1677 1675 1612 1641 1690 1705
Great, we have perfect observations that over 10,000 rolls we have an average of 3.5107. But that's history now and we can't do much about that. The question we might want to answer is 'is this die fair', and then we're back to comparing the process which generated the observations above with the process which gives us each number with a 1/6 chance.
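The fairness question is typically settled with a chi-square goodness-of-fit test. As a sketch in Python (the test statistic only; R's chisq.test would also report the p-value), using the counts above:

```python
# Chi-square goodness-of-fit statistic for the die counts above.
counts = [1677, 1675, 1612, 1641, 1690, 1705]
n = sum(counts)          # 10,000 rolls in total
expected = n / 6         # each face expected n/6 times under a fair die
chi2 = sum((obs - expected) ** 2 / expected for obs in counts)
```

The statistic comes out around 3.5 on 5 degrees of freedom, far below the 5% critical value of about 11.07, so these simulated rolls look entirely consistent with a fair die.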
17,993 | Why don’t we calculate the average of an entire given population instead of computing confidence interval to estimate the population mean? | Maybe I am just imagining this, but from what you
say in your question, it seems you might be receptive to the idea of bootstrapping, briefly
illustrated below.
Suppose you have a random sample of size $50$ from some unknown
population. You don't know the population mean or standard deviation, nor do you know the general shape of the distribution. You do assume the population mean $\mu$ exists and want a 95% confidence interval of $\mu.$ A boxplot of the
fifty observations at hand is as shown.
boxplot(x, horizontal=T, col="skyblue2", pch=20)
If you had a good idea of the variability of the
sample mean $\bar X$ around the population mean $\mu,$ you could work with the
difference $D = \bar X -\mu:$ from $P(L_D < \bar X-\mu < U_D) = 0.95$ you would get the 95% CI $(\bar X -U_D,\, \bar X-L_D).$
Instead, you look at many 're-samples' of 50 observations, sampling with replacement from the
data you have. (A typical number of re-samples is $B = 2000,$ a job for a computer.) From them you can often get an idea
of the values of $U_D$ and $L_D,$ and hence an
approximate CI for $\mu.$ One 95% bootstrap CI for $\mu$ turns out to be $(22.0,\, 28.3).$
a.obs = mean(x); a.obs
[1] 25.28573
set.seed(626)
d = replicate(2000, mean(sample(x, 50, rep=T))-a.obs)
UL = quantile(d, c(.975,.025))
a.obs - UL
97.5% 2.5%
22.04661 28.26154 ## Bootstrap CI.
Of course, the randomness in the 'resampling' will
lead to a slightly different bootstrap CI on each run. (My partial cure for that is heavy rounding of endpoints.) Also, there are many different styles of bootstrap CIs.
Note: The data in the example above are sampled in R from a gamma distribution with $\mu = 25.$
set.seed(2022)
x = rgamma(50, 4, 4/25)
If you don't like the bootstrapping idea, then I recommend @Dave's (+1) Answer.
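For readers who prefer Python, the same basic bootstrap can be sketched with the standard library alone. The sample is re-simulated here (a gamma sample with mean $25,$ mirroring the R note above), so the resulting interval will not match the R numbers exactly:

```python
import random

random.seed(2022)
# Stand-in data: 50 draws from a gamma distribution with mean 25.
# (random.gammavariate takes shape and *scale*, so rate 4/25 becomes scale 25/4.)
x = [random.gammavariate(4, 25 / 4) for _ in range(50)]
a_obs = sum(x) / len(x)

B = 2000  # number of bootstrap re-samples
d = sorted(
    sum(random.choices(x, k=len(x))) / len(x) - a_obs  # deviation of each re-sampled mean
    for _ in range(B)
)
L_D, U_D = d[int(0.025 * B)], d[int(0.975 * B)]
ci = (a_obs - U_D, a_obs - L_D)  # 95% bootstrap CI for mu
```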
17,994 | Why don’t we calculate the average of an entire given population instead of computing confidence interval to estimate the population mean? | Update
So far neither the question nor any answer has provided a compelling example for the situation "we are given an entire population, so we don't need to do any inferring." I've been wondering whether this occurs outside a “Stat 101”-type of class and I found two examples in Improving Your Statistical Inferences, a great freely available resource by Daniël Lakens. The section Population vs. Sample is well worth the read by anyone who has the same question as the OP.
The first example is definitely a “Stat 101”-type of example, cute and without any relevance to a real-world problem: twelve people have walked on the moon, we know their height (or someone in NASA does), so we know the population average height of humans who have walked on the moon.
The second example is more interesting: [2] is a registry-based study of all children in Norway aged 5–17 between 2008 and 2016 (n = 1,354,393 children). The researchers investigate whether family income is linked to childhood mental illness. Even though technically the researchers have the entire population, they perform and report inferences, e.g., "In the bottom 1% of parental income, 16.9% [95% confidence interval (CI): 15.6, 18.3] of boys had a mental disorder compared with 4.1% (95% CI: 3.3, 4.8) in the top 1%." A good example that there is much more to "inferring" in practice than the population mean, which we can compute (if we have the population) or estimate (if we have a sample from the population).
References
[1] Daniël Lakens. Improving Your Statistical Inferences. Available online.
[2] J. M. Kinge et al. Parental income and mental disorders in children and adolescents: prospective register-based study. International Journal of Epidemiology, 50(5):1615–1627, 2021.
The statement "if we are given an entire population, why don't we calculate the average of that population" might be an unhelpful abstraction. For one, we average measurements, not samples or populations.
Statistics is the study of measurements and variation. If we can enumerate all individuals in a population, if we can measure the quantity of interest exactly from each individual and if those measurements and the population never change, then yes, there is no need for statistics because we have the complete data and there is no uncertainty to quantify. Also, this scenario sounds rather contrived.
Instead it might help to think of a non-trivial real-world example where the population is small & well understood, yet we still use statistics: forecasting the US presidential election, a process which involves the 50 US states (+ Washington, D.C.). We know how each state voted in the past and we want to predict how each state will vote in the next elections based on the historical voting patterns, past & current polls and any other relevant information. There is much uncertainty: we cannot measure precisely voting intent today and even if we could, the voting intent today is not exactly the same as the voting intent tomorrow. So even though we know the population of US states and it has only 50 members, to forecast elections, it can help to think of it as a dynamic collection of 50 states, one for each day until the election.
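The intervals quoted from [2] can be illustrated with the textbook normal-approximation interval for a proportion (a hypothetical sketch with made-up counts; the study's actual intervals come from a register-wide model):

```python
import math

def wald_ci(successes, n, z=1.96):
    """95% normal-approximation (Wald) CI for a binomial proportion."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Made-up numbers: 169 of 1,000 boys with a diagnosis -> 16.9%.
lo, hi = wald_ci(169, 1000)
```

With these invented counts the interval works out to roughly (14.6%, 19.2%); the far narrower intervals in [2] reflect its much larger n.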
17,995 | Why don’t we calculate the average of an entire given population instead of computing confidence interval to estimate the population mean? | Apart from the population being too big, or part of it hidden somewhere, some of the objects may simply not exist at the time of the measurement.
If you craft bolts and measure them, statistical estimates such as the average length, plus or minus a margin, remain valid for the bolts you will make in the future, provided you do not change the technology or the intended length.
It is also possible to imagine situations where some objects are already lost or destroyed at the time of the measurements, and you need to draw conclusions about them from the objects still available.
17,996 | Why don’t we calculate the average of an entire given population instead of computing confidence interval to estimate the population mean? | We compute confidence intervals to estimate the true population mean of either a sample (when population standard deviation is unknown)
"Population mean" refers to the mean of the population. There's no population mean of the sample, only sample mean.
But I wonder why, if we are given an entire population, why don't we calculate the average of that population instead of computing confidence interval to estimate the population mean since we already have the entire population?
We aren't given the entire population. However, in hypothesis testing we, as the name suggests, test hypotheses. The hypothesis generally specifies at the very least a family of distributions (e.g. "The null hypothesis is that the data is normally distributed"), and generally includes one or more parameters for that family. When a null hypothesis is that the data is normally distributed and specifies a mean and a standard deviation, we use the z-test. When it says it's normally distributed and specifies a mean but not a standard deviation, we use the t-test.
When we have a null hypothesis that has a specific standard deviation, that doesn't mean we were given that standard deviation, any more than having a null hypothesis with a specific mean means that we were given that mean. It just means that we are testing the hypothesis that that is the correct standard deviation. So technically speaking, if we reject the null with a z-test, that means that the null hypothesis' mean is incorrect, OR its standard deviation is wrong, OR the data is not normally distributed.
Reasons we would use a z-test include:
- The sample size is large enough that the difference between z and t is negligible.
- We have some process that we think may have been altered in a manner that would change the mean, but we think the chances of the standard deviation changing are significantly lower.
- We're using a simpler test for pedagogical reasons, to keep things simple when first introducing students to hypothesis testing.
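A minimal sketch of the z-test described above (standard-library Python; the data values are hypothetical):

```python
import math

def z_test(xbar, mu0, sigma, n):
    """Two-sided one-sample z-test; sigma is fixed by the null, not estimated."""
    z = (xbar - mu0) / (sigma / math.sqrt(n))
    phi = 0.5 * (1 + math.erf(abs(z) / math.sqrt(2)))  # standard normal CDF
    return z, 2 * (1 - phi)

# Hypothetical sample: observed mean 103 from n = 100, null N(100, 15).
z, p = z_test(xbar=103.0, mu0=100.0, sigma=15.0, n=100)
```

Here $z = 2$ and $p \approx 0.046$; as the answer notes, rejecting means the null's mean is wrong, or its standard deviation is wrong, or the data is not normally distributed.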
17,997 | Why don’t we calculate the average of an entire given population instead of computing confidence interval to estimate the population mean? | The other answers are good, but to give a concrete example: suppose you want to know the average height of mayors of London. So far there have only been 3 mayors of London, so you take the mean of their heights and find it is 173cm. This is exactly the mean height of all the mayors of London so far. But there will be more mayors of London in future, and they will shift the mean to different values, so it isn't really useful to claim that the population mean is certainly exactly 173cm.
17,998 | Intuition of Random Walk having a constant mean | There is a difference between unconditional mean and conditional mean, as there is between unconditional variance and conditional variance.
Mean
For a random walk
$$
Y_t=Y_{t-1}+\varepsilon_t
$$
with $\varepsilon_t\sim i.i.d.(0,\sigma_\varepsilon^2)$, the conditional mean is
$$
\mathbb{E}(Y_{t+h}|Y_{t})=Y_t
$$
for $h>0$. This means that given the last observed value $Y_t$, the conditional mean of the process after $h$ periods, $\mathbb{E}(Y_{t+h}|Y_{t})$, is that value, regardless of how much time $h$ has passed. If time starts at $t=0$, then we have the mean conditional on the initial value being $\mathbb{E}(Y_{h}|Y_{0})$. From this we can see that the conditional mean varies with the conditioning information but not the time differential $h$.
Meanwhile, taking $Y_0=0$, the unconditional mean at any fixed time point $h$ is zero:
$$
\mathbb{E}(Y_{h})=\mathbb{E}(\sum_{i=1}^h\varepsilon_i)=\sum_{i=1}^h\mathbb{E}(\varepsilon_i)=\sum_{i=1}^h(0)=0.
$$
Since it does not vary with $h$, we could say the mean of the process is zero.
Variance
The conditional variance is
$$
\text{Var}(Y_{t+h}|Y_t)=h\sigma_\varepsilon^2.
$$
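The formula follows from unrolling the recursion: $Y_{t+h}=Y_t+\sum_{i=1}^{h}\varepsilon_{t+i}$, and since the future increments are independent of $Y_t$ and of each other,
$$
\text{Var}(Y_{t+h}|Y_t)=\text{Var}(\sum_{i=1}^{h}\varepsilon_{t+i})=\sum_{i=1}^{h}\sigma_\varepsilon^2=h\sigma_\varepsilon^2.
$$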
For a fixed time differential $h$, the conditional variance is not increasing (the fluctuations are not getting wilder) over time, but conditional on some fixed time point, the variance grows linearly with the time difference. Thus, contrary to the conditional mean, the conditional variance does not vary with the conditioning information but does vary with (namely, grows linearly in) the time differential $h$.
Meanwhile, taking $Y_0=0$, the unconditional variance at any fixed time point $h$ is $h$ times the variance of the increment term:
$$
\text{Var}(Y_h)=\text{Var}(\sum_{i=1}^h\varepsilon_i)=\sum_{i=1}^h\text{Var}(\varepsilon_i)=\sum_{i=1}^h(\sigma_\varepsilon^2)=h \sigma_\varepsilon^2
$$
where the second equality uses the independence of the increments $\varepsilon_i$. Note that we can easily define the variance at a fixed time point but it is not as simple otherwise. Without being very rigorous, one could say the variance is undefined for an undefined time point. (This is in contrast to the mean.) | Intuition of Random Walk having a constant mean | There is a difference between unconditional mean and conditional mean, as there is between unconditional variance and conditional variance.
Mean
For a random walk
$$
Y_t=Y_{t-1}+\varepsilon_t
$$
with | Intuition of Random Walk having a constant mean
There is a difference between unconditional mean and conditional mean, as there is between unconditional variance and conditional variance.
Mean
For a random walk
$$
Y_t=Y_{t-1}+\varepsilon_t
$$
with $\varepsilon_t\sim i.i.d(0,\sigma_\varepsilon^2)$, the condtional mean is
$$
\mathbb{E}(Y_{t+h}|Y_{t})=Y_t
$$
for $h>0$. This means that given the last observed value $Y_t$, the conditional mean of the process after $h$ periods, $\mathbb{E}(Y_{t+h}|Y_{t})$, is that value, regardless of how much time $h$ has passed. If time starts at $t=0$, then we have the mean conditional on the initial value being $\mathbb{E}(Y_{h}|Y_{0})$. From this we can see that the conditional mean varies with the conditioning information but not the time differential $h$.
Meanwhile, the unconditional mean at any fixed time point $h$ is zero:
$$
\mathbb{E}(Y_{h})=\mathbb{E}(\sum_{i=0}^h\varepsilon_i)=\sum_{i=0}^h\mathbb{E}(\varepsilon_i)=\sum_{i=0}^h(0)=0.
$$
Since it does not vary with $h$, we could say the mean of the process is zero.
Variance
The conditional variance is
$$
\text{Var}(Y_{t+h}|Y_t)=h\sigma_\varepsilon^2.
$$
For a fixed time differential $h$, the conditional variance is not increasing (the fluctuations are not getting wilder) over time, but conditional on some fixed time point the unconditional variance grows linearly with the time difference. Thus contrary to the conditional mean, the conditional variance does not vary with the conditioning information but does vary with (namely, grows linearly in) the time differential $h$.
Meanwhile, the unconditional variance at any fixed time point $h$ is $h$ times the variance of the increment term:
$$
\text{Var}(Y_h)=\text{Var}\left(\sum_{i=1}^h\varepsilon_i\right)=\sum_{i=1}^h\text{Var}(\varepsilon_i)=\sum_{i=1}^h\sigma_\varepsilon^2=h \sigma_\varepsilon^2
$$
where the second equality uses the independence of the increments $\varepsilon_i$. Note that we can easily define the variance at a fixed time point, but it is not as simple otherwise. Without being very rigorous, one could say the variance is undefined for an undefined time point. (This is in contrast to the mean.)
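The conditional-variance formula $\text{Var}(Y_{t+h}|Y_t)=h\sigma_\varepsilon^2$ can also be verified numerically. A Python sketch (the choice of the observed value $Y_t=5$ is arbitrary; standard normal increments, so $\sigma_\varepsilon^2=1$):

```python
import random
import statistics

random.seed(1)
n_walks, y_t = 20_000, 5.0  # condition on an arbitrary observed value Y_t = 5

cond_var = {}
for h in (10, 50):
    # Y_{t+h} given Y_t is the observed value plus h fresh N(0, 1) increments.
    future = [
        y_t + sum(random.gauss(0.0, 1.0) for _ in range(h))
        for _ in range(n_walks)
    ]
    cond_var[h] = statistics.variance(future)
    print(h, cond_var[h])  # sample variance close to h * sigma^2 = h
```

The sample variances come out close to 10 and 50 respectively: linear in $h$, and unaffected by the conditioning value.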
17,999 | Intuition of Random Walk having a constant mean | To see what is happening you need more than one realisation of the random walk, because the mean and variance are summaries of the distribution of the walk, not of any single realisation.
This code repeats your code to plot 20 random walks
set.seed(1)
ys <- replicate(20, {
  TT <- 100
  y <- ww <- rnorm(n = TT, mean = 0, sd = 1)
  for (t in 2:TT) {
    y[t] <- y[t - 1] + ww[t]
  }
  y
})
matplot(1:100, ys, type = "l",
        col = rep(c("black", "grey"), c(1, 19)),
        lwd = rep(c(2, 1), c(1, 19)), lty = 1)
to give a plot of the 20 overlaid walks (figure not reproduced here).
Any single realisation of the random walk will randomly walk off up or down the graph. The entire cloud of possible random walks stays centered at zero and spreads out as time passes; some go up, some go down, some stay near the middle. The mean of the cloud stays at zero; the variance increases linearly with time.
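The qualitative claim about the cloud can be checked without graphics. A Python sketch of the same experiment (the answer's own code is R) simulates many walks and looks at the cross-section of positions at two time points:

```python
import random
import statistics

random.seed(2)
n_walks, T = 5_000, 100

# Simulate the cloud: walks[i][t - 1] is the position of walk i at time t.
walks = []
for _ in range(n_walks):
    y, path = 0.0, []
    for _ in range(T):
        y += random.gauss(0.0, 1.0)
        path.append(y)
    walks.append(path)

for t in (10, 100):
    cross_section = [w[t - 1] for w in walks]
    # Mean of the cloud stays near 0; its spread grows like sqrt(t).
    print(t, statistics.mean(cross_section), statistics.stdev(cross_section))
```

At both time points the cross-sectional mean stays near zero, while the standard deviation grows from roughly $\sqrt{10}\approx 3.2$ to roughly $\sqrt{100}=10$, matching the linear growth of the variance.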
18,000 | Probability of flipping heads after three attempts | There is a slightly easier approach. Since you asked not to be given the answer, here are some hints:
In effect you flip each coin up to three times. If it comes up heads on any of those then you stop with that coin
What is the probability you get three tails with a particular coin?
So what is the probability you get that coin showing heads in the up-to-three attempts?
Each of the three coins is independent of the other.
So what is the probability you get all three coins showing heads in the up-to-three attempts?
As a check, you should have an answer with denominator $2^9=512$ and a final answer close to but not exactly $\frac23$.