Is it a confounding variable?
Here, smoking is the confounder.
The exposure is coffee drinking and the outcome is heart attack.
To be a confounder the variable has to be a cause, or a proxy for a cause of both the exposure and the outcome. It does not have to be a direct cause.
So here, it is sufficient for there to simply be correlation between coffee drinking and smoking, because they share a common cause.
One of the best ways to understand confounding, and causal inference in general, is to use directed acyclic graphs (sometimes called causal diagrams). See the work by Judea Pearl on causality for details of the underlying theory. To illustrate, consider the following DAG:
This was produced with DAGitty (www.dagitty.net), a free online tool which implements DAG theory with a view to explaining confounding and to informing the minimal set of covariates to adjust for in a regression model in order to obtain the true causal effect. You may want to click on the figure to get a more detailed view. Here E is the exposure and D is the outcome. A is a cause of both E and D, so A is obviously a confounder, and DAGitty tells us in the top right-hand corner that if we adjust for A in a regression model we can obtain the true total causal effect of E on D. It is important to understand that this is the case only if the DAG is "correct" (i.e. we have included all relevant variables and the correct directions of causality).
Now, note that in the top left corner it says that variable A is "adjusted" - that means we have observed it. However, in the particular example in your question, we haven't observed it (we may have no idea what it is, only that it exists), instead, we have observed S (smoking) and now we have the following DAG:
So, there is no causal relationship between smoking (S) and our exposure (E), but they will be correlated due to them having a common cause (A). Note that in the top left corner we have specified A as unobserved, and in the top right corner DAGitty tells us that we simply need to adjust for S (smoking). So smoking isn't a "true" confounder, it is a proxy for A, which is the true (unobserved) confounder, and that is probably at the heart of the confusion here.
Now, let's introduce another, unobserved, confounder, B:
DAGitty now tells us that we cannot estimate the true causal effect, and that is because we have residual confounding due to the unobserved confounder B. Sadly, this is often the case in observational studies, which is why clinical trials are considered the gold standard in terms of causality (this is not to say that trials are always perfect). This also explains why it is sometimes said that correlation is a "poor definition" of a confounder: the correlation between smoking and coffee drinking is not solely due to A, it is distorted by B.
To sum up, the issue has to do with "true" confounders versus "proxy" confounders, and what assumptions are made (or not made!) about the unobserved variables and the causal relationships.
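A quick simulation can make the proxy-confounder adjustment concrete. This is a sketch, not part of the original answer: the DAG structure (A causes both E and S, and S causes D), the coefficients, and the sample size are illustrative assumptions consistent with the description above.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# A: the unobserved common cause of coffee drinking (E) and smoking (S)
A = rng.normal(size=n)
E = A + rng.normal(size=n)        # exposure: coffee drinking
S = A + rng.normal(size=n)        # observed proxy confounder: smoking
D = 1.5 * S + rng.normal(size=n)  # outcome: caused by smoking, not by coffee

# Naive regression of D on E is biased: it picks up the open
# backdoor path E <- A -> S -> D.
naive = np.linalg.lstsq(np.c_[np.ones(n), E], D, rcond=None)[0][1]

# Adjusting for S blocks the backdoor path and recovers the true
# (zero) causal effect of coffee drinking on the outcome.
adjusted = np.linalg.lstsq(np.c_[np.ones(n), E, S], D, rcond=None)[0][1]
```

Here the naive coefficient is far from zero even though coffee has no causal effect, while the coefficient after adjusting for the proxy S is close to zero, exactly the pattern DAGitty predicts for this graph.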
Is it a confounding variable?
As a somewhat less technical answer, and without going into strict definitions of what counts as proper causality and what does not:
The word 'to confound' itself means "to mistake / confuse something for something else".
A confounding variable, therefore, putting strict technical contexts aside, is a variable [whose effect] is mistaken for that of another. In this case, the effects of smoking are unaccounted for and thus wrongly attributed to that of coffee consumption, when in fact it has no [direct] effect once smoking is taken into account. Thus smoking is the 'confounding variable'.
Arguing about definitions generally tends not to be a very fruitful or useful form of argument, unless the discussion is explicitly "what is the strict definition of X given context Y". In the passage you quote, they are effectively defining confounding as per that example. You're effectively saying you disagree with that definition. That's fine. As long as you use it in a context where your definition is that understood by your peers and can be put to use, then it is appropriate.
It may be that in some stricter contexts there is a stricter definition under which the above example does not count as confounding. But smoking confounding the apparent effect of coffee on cancer is THE textbook example of confounding, and I would say that this is what most people understand confounding to mean, i.e. "we wrongly attributed the effect to X, when in fact controlling for Y made the effect from X disappear".
Demeaning with two (n) fixed effects in panel regressions
If you use the demeaning approach (which is theoretically right), then you have to demean your data both cross-sectionally and over time (irrespective of the order). Here is how it works.
Assume following regression model:
$$y_{it} = u_i + \nu_t + \beta X_{it} + e_{it}, \quad i = 1, \dots, n, \quad t = 1, \dots, T \tag{1}$$
First demean cross sectionally. Mean equation is
$$\bar{y}_i = u_i + \bar{\nu} + \beta \bar{X}_i + \bar{e}_i, \quad i = 1, \dots, n \tag{2}$$
(notice that $\bar{\nu}$ is the same for each cross-sectional unit)
Subtracting (1) - (2)
$$y_{it} - \bar{y}_i = \nu_t - \bar{\nu} + \beta (X_{it} - \bar{X}_i) + (e_{it} - \bar{e}_i) \tag{3}$$
(note that there is no individual fixed effect in equation 3)
Now, take the mean of equation 1 for each $t$; the mean equation is
$$\bar{y}_t = \bar{u} + \nu_t + \beta \bar{X}_t + \bar{e}_t, \quad t = 1, \dots, T \tag{4}$$
Now, subtracting equation 4 from equation 3, we get:
$$y_{it} - \bar{y}_i - \bar{y}_t = \beta (X_{it} - \bar{X}_i - \bar{X}_t) - (\bar{\nu} + \bar{u}) + (e_{it} - \bar{e}_i - \bar{e}_t) \tag{5}$$
In this way, there are no individual fixed effects or time effects left in equation 5.
Demeaning with two (n) fixed effects in panel regressions
I have been trying to figure this out myself, and the only other answer to this question is wrong. You need to subtract the time and group means but then add the overall mean back in. See Greene (2012) on fixed time and group effects (section 11.4.4). You can try it out yourself and see that just subtracting the time and group means does not give the correct result.
Demeaning with two (n) fixed effects in panel regressions
The point is to rid the process of the individual-specific and time-specific nuisance parameters (in this case the $\alpha_i$ and $\theta_t$ in notation below). This idea dates back all the way to Neyman and Scott's (1948) Econometrica paper. To do so, you should subtract the individual-specific and time-specific means and add back the grand mean, as follows (suppose $x_{it}$ is scalar without loss of generality):
$y_{it} = \alpha_i + \theta_t + \beta x_{it} + u_{it}$
$\bar{y}_{i}:=\frac{1}{T}\sum_{t=1}^Ty_{it} = \alpha_i + \bar{\theta} + \beta \bar{x}_{i} + \bar{u}_{i}$
$\bar{y}_{t}:=\frac{1}{N}\sum_{i=1}^N y_{it} = \bar{\alpha} + \theta_t + \beta \bar{x}_{t} + \bar{u}_{t}$
$\bar{y}:=\frac{1}{NT}\sum_{t=1}^T\sum_{i=1}^N y_{it} = \bar{\alpha} + \bar{\theta} + \beta \bar{x} + \bar{u}$,
now, define $\dot{y}_{it}:= y_{it}-\bar{y}_{i}-\bar{y}_{t} + \bar{y}$ (and likewise $\dot{x}_{it}$ and $\dot{u}_{it}$) to obtain an equation which can readily be used to estimate $\beta$ under suitable restrictions, which I won't state here as they are beyond the OP's scope:
$\dot{y}_{it} = \beta \dot{x}_{it} + \dot{u}_{it}$
The current top answer isn't really appropriate because the nuisance parameters are still in the model; that is why adding back the grand mean is important. For more modern references, see standard graduate texts on panel data, e.g., Analysis of Panel Data by Hsiao, p. 62, or Econometric Analysis of Panel Data by Baltagi, pp. 27-28.
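As a numerical sanity check (a sketch, not part of the original answer; the sample sizes, true $\beta$, and noise scale are made-up values), the two-way within transformation above recovers $\beta$ in a simulated balanced panel:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, beta = 50, 20, 2.0

alpha = rng.normal(size=(N, 1))              # individual effects alpha_i
theta = rng.normal(size=(1, T))              # time effects theta_t
x = rng.normal(size=(N, T)) + alpha + theta  # regressor correlated with both
u = rng.normal(scale=0.1, size=(N, T))
y = alpha + theta + beta * x + u

def two_way_within(z):
    # subtract individual means and time means, then add the grand mean back
    return (z - z.mean(axis=1, keepdims=True)
              - z.mean(axis=0, keepdims=True) + z.mean())

y_dot, x_dot = two_way_within(y), two_way_within(x)
beta_hat = (x_dot * y_dot).sum() / (x_dot**2).sum()  # pooled OLS, no intercept
```

Dropping the `+ z.mean()` term reproduces the problem with the other answer: a constant is left in the transformed equation.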
Demeaning with two (n) fixed effects in panel regressions
There also exists an iterative procedure using the equation shown by Neeraj (source: https://doi.org/10.1177/1536867X1501500318). This procedure also seems to lead to the same results when the panel is unbalanced. I assume that the formula shown by Greene is also subject to some restrictions, as addressed by Helix123.
Is there a name for a moving average when it is done not across time but some other variable?
A moving average filter is a special case of a finite impulse response (FIR) filter, where equal weights are used that add up to unity.
Note that in the case of time-sampled data the result of the averaging is written at the time index of the most recent data point in the averaging window, hence the name filter. If another index is used, this is interpreted as the use of future information, and the procedure is called smoothing.
A Moving Average Filter/Smoother clearly operates based on the assumption that the underlying state changes slowly hence can be recovered by locally averaging in order to reduce the observation noise.
If our indexing is based on another variable, we are not doing something very different. Time indexing can be thought of as random sampling from a uniform distribution. Applying a similar local averaging idea in this case corresponds to kernel regression, or a kernel smoother.
Since there is no time component, the filter vs. smoother distinction is not very relevant (likewise whether the filter is causal or not).
We are also flexible in the weights. If we use a uniform kernel, the equal weighting of the moving average is imitated. Other kernels are clearly applicable, similar to FIR filters.
The main distinction comes when determining the neighbourhood. With time samples, equally spaced sampling is usually assumed. In the regression case a more sophisticated distance metric needs to be employed. For a single independent variable this is not much of a concern (distance on a line is very intuitive), but if there are many independent variables then the distance calculation severely affects which data points are included in the averaging.
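A minimal sketch of such a kernel smoother in Python (the Gaussian kernel, bandwidth, and test function are illustrative choices, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(1)

# x is an arbitrary (non-time) variable: unsorted, irregularly spaced.
# y is a noisy signal observed at those x values.
x = rng.uniform(0, 10, size=500)
y = np.sin(x) + rng.normal(scale=0.3, size=x.size)

def kernel_smooth(x0, x, y, bandwidth=0.5):
    """Nadaraya-Watson estimate at x0: a 'moving average' whose
    weights decline with distance in x rather than in time."""
    w = np.exp(-0.5 * ((x - x0) / bandwidth) ** 2)
    return (w * y).sum() / w.sum()

estimate = kernel_smooth(5.0, x, y)  # local average of y near x = 5
```

Replacing the Gaussian weights with an indicator on `|x - x0| < bandwidth` gives exactly the equal-weight moving average, indexed by x instead of time.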
Is there a name for a moving average when it is done not across time but some other variable?
Terminology can differ between fields, even fields apparently sharing applications. Based on statistical theory and practice in several fields (time series, spatial series, any application where a response may be smoothed as a function of predictors), I propose simply that a moving average is still a moving average outside a time series context.
There is no good reason to include as part of a definition of moving average that the application must be to time series. In practice, that is likely to be the most common application and also the example that people meet first, but neither fact is decisive in principle.
It's not even crucial that you have

- at most one non-missing value at each point
- regularly spaced values

on the dimension or dimensions you are averaging over. (On dimensions, note that averaging over values at neighbouring points in space is often helpful.)
You can always define sets of weights (kernels) that are general enough to cope with such complications. I assert that weights that decline with time or distance from the point being averaged for are often more useful than equal weights. Whether an average should be asymmetric (e.g. only considering "earlier" points) is up for discussion too.
So, to make a key point explicit: I see no reason to define moving averages as being based on equal weights. In time series analysis, equal weights are often used, but that is a matter of convention or simplicity at most. Basic theory and practice combine to show that equal weights have unfortunate properties in the frequency domain and are especially sensitive to outliers, as averages can jump when an outlier leaves or enters the window (often, although not always, regarded as undesirable).
Note that we can be flexible about what is an average in this context as well as any other. Prefer medians? trimmed means? Being clear about what you're doing is the main imperative about use of terminology.
The term scatter plot smoother fits some applications that are not time series, but clearly not all.
Chi square test when sample sizes are different?
You can use a chi-squared test in your example with different sample sizes. Your "another verb type" would be verbs that are not oral verbs, i.e. all the other verbs.
Suppose in your example, $10$ of the $82$ verbs in sample one were oral verbs and $72$ were not, while $20$ of the $89$ verbs in sample two were oral verbs and $69$ were not. Then the table for your four cell chi-squared test could look like
10  72 |  82
20  69 |  89
__ ___ | ___
30 141 | 171
and in R you might get
chisq.test(rbind(c(10, 72), c(20, 69)))
# Pearson's Chi-squared test with Yates' continuity correction
#
# data: rbind(c(10, 72), c(20, 69))
# X-squared = 2.4459, df = 1, p-value = 0.1178
so this example would not be statistically significant at the 5% level.
Chi square test when sample sizes are different?
Just in case anyone is looking for the Python version of this, you can use scipy's chi2_contingency: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.chi2_contingency.html
Using the same example as @Henry
import numpy as np
from scipy.stats import chi2_contingency
obs = np.array([[10, 72], [20, 69]])
chi2, p, dof, ex = chi2_contingency(obs)
print(chi2, dof, p)
> 2.44591778277931 1 0.11783094937852609
which is the same result as R's chisq.test (both apply Yates' continuity correction by default for 2x2 tables).
How to generate samples of Poisson-Lognormal distribution
You can generate a sample by first generating a normally distributed value, then take the exponent of that, then use that as the parameter in a Poisson distribution and take a sample from that distribution. The resulting samples of this three-step process will be Poisson-Lognormally distributed.
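The three-step recipe above can be vectorised in a few lines; the lognormal parameters `mu` and `sigma` below are arbitrary illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 1.0, 0.5, 100_000   # illustrative lognormal parameters and sample size

# Steps 1-2: draw normal values and exponentiate -> lognormally distributed Poisson rates
rates = np.exp(rng.normal(mu, sigma, n))
# Step 3: draw one Poisson value per rate
samples = rng.poisson(rates)

# Sanity check: the mean of a Poisson-Lognormal equals E[rate] = exp(mu + sigma**2 / 2)
print(samples.mean(), np.exp(mu + sigma**2 / 2))
```

The two printed numbers should agree closely for a large sample, since conditioning on the rate and averaging recovers the lognormal mean.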
|
44,812
|
Is causal inference only from data possible?
|
Suppose we are given a dataset but not the capability of performing
some AB testing. We do some regression using X as predictor and Y as
response and get a model. Can we actually say something about the
causal relationship between X and Y?
No you can't, even when all variables are observed, see here for instance. If you are given only distributional information about the data (i.e., you know the joint distribution of the observed variables), but no information about how the data was generated (a causal model), causal inference is impossible. In short, you need causal assumptions to get causal conclusions. You can get started on learning causal inference with the references here.
It is easy to understand why that is the case by constructing an example where different causal models entail the same observed joint probability distribution. Consider that you have observed the joint probability distribution $P(x, y)$ of two random variables. Here, imagine you have no sampling uncertainty---so you have perfect knowledge of $P(x, y)$, which entails perfect knowledge of the regression function and so on. To simplify things, consider that, in your data, $P(x,y)$ was found to be jointly normal with mean $0$, variance 1 and covariance $\sigma_{xy}$ (this is without loss of generality, you can always standardize the data). What can you say about the causal effect of $x$ on $y$ or vice-versa?
With only this information, nothing. The reason is that there are several causal models that would create the same observed distribution, yet have different interventional (and counterfactual) distributions. Here I will show three such models. Notice all of them give you the same observed $\sigma_{xy}$, but their causal conclusions are different: in the first model $X$ causes $Y$, in the second model $Y$ causes $X$, and, in the third model, neither causes the other --- $X$ and $Y$ share the unobserved common cause $Z$.
Model 1
$$
X = u_{x}\\
Y = \sigma_{xy} x + u_{y}
$$
Where $U_{x} \sim \mathcal{N}(0, 1)$ and $U_{y} \sim \mathcal{N}(0, 1 - \sigma_{xy}^2)$.
Model 2
$$
Y = u_{y}\\
X = \sigma_{xy} y + u_{x}
$$
Where $U_{x} \sim \mathcal{N}(0, 1 - \sigma_{xy}^2)$ and $U_{y} \sim \mathcal{N}(0, 1)$.
Model 3
$$
Z = U_{z}\\
X = \alpha Z + U_{x}\\
Y = \beta Z + U_{y}
$$
Where $\alpha\beta = \sigma_{xy}$, $U_{z} \sim \mathcal{N}(0, 1)$, $U_{x} \sim \mathcal{N}(0, 1 - \alpha^2)$ and $U_{y} \sim \mathcal{N}(0, 1 - \beta^2)$.
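A quick simulation (a sketch; the target covariance $\sigma_{xy} = 0.5$ is an arbitrary choice) confirms that all three models generate the same observational covariance:

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 1_000_000, 0.5   # sample size and target covariance sigma_xy

# Model 1: X causes Y
x1 = rng.normal(0, 1, n)
y1 = s * x1 + rng.normal(0, np.sqrt(1 - s**2), n)

# Model 2: Y causes X
y2 = rng.normal(0, 1, n)
x2 = s * y2 + rng.normal(0, np.sqrt(1 - s**2), n)

# Model 3: unobserved common cause Z, with alpha * beta = sigma_xy
a = b = np.sqrt(s)
z = rng.normal(0, 1, n)
x3 = a * z + rng.normal(0, np.sqrt(1 - a**2), n)
y3 = b * z + rng.normal(0, np.sqrt(1 - b**2), n)

# All three empirical covariances come out close to 0.5
for x, y in [(x1, y1), (x2, y2), (x3, y3)]:
    print(round(np.cov(x, y)[0, 1], 2))
```

Yet intervening on $X$ would change $Y$ only in the first model, which no amount of observational data can reveal.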
|
44,813
|
Is causal inference only from data possible?
|
From data alone, it's impossible. There could always be some factor outside the model that could influence both $X$ and $Y$ (or one of them). It's impossible to control for literally everything.
The closest we have is a randomized control experiment, but even that has problems with external validity (e.g. we assume that what happened and the conditions during the time of the experiment will persist into the indefinite future).
There is 'Granger causality' (which is not true causality), which basically says that if the parameters on the lagged $X$ variables in a regression of $Y(t)$ on $X(t-1), ..., X(t-m), Y(t-1), ..., Y(t-m)$ are jointly significant, then $X$ 'Granger-causes' $Y$. See Granger (1969).
|
44,814
|
Is causal inference only from data possible?
|
Potentially. Your intuition about the necessity to "resort to some physical/mechanical mechanism" is correct but that does not mean that the explicit definition of such mechanism is required. We can relax this problem.
There is a lot of work on causal inference from observational data where we do not explicitly formulate the causal model in form of a clear parametric equation. There are "ML-flavoured" approaches like: "Learning Representations for Counterfactual Inference" by Johansson et al., "Causal inference by using invariant prediction" by Peters et al., "Causal Forests" by Athey and various collaborators that make significant inroads.
Let's be clear: these approaches require substantial amounts of data and are far from prime-time ready. Nevertheless they offer evidence that while using observational data to answer causal questions is risky, obtaining answers is not impossible.
Final note: we have only recently started coming up with "causal datasets" - datasets where causal effects are carefully annotated. The grand revolution in Computer Vision came through the abundance of available labeled training data. Causal inference work so far has not enjoyed such a data-rich environment. Initiatives like the Causality Workbench, the Causal Inference challenges, and the Tübingen cause-effect pairs dataset give us test-beds that were simply unavailable only 10 years ago.
|
44,815
|
How is it that an ML estimator might not be unique or consistent?
|
A multimodal likelihood function can have two modes of exactly the same value. In this case, the MLE may not be unique, as there may be two possible estimators that can be constructed from the equation $\partial l(\theta; x) /\partial \theta = 0$.
Example of such a likelihood from Wikipedia:
Here, see that there's no unique value of $\theta$ that maximises the likelihood. The Wikipedia link also gives some conditions on the existence of unique and consistent MLEs, although I believe there are more (a more comprehensive literature search would guide you well).
Edit: This link about MLEs, which I believe are lecture notes from Cambridge, lists a few more regularity conditions for the MLE to exist.
You can find examples of inconsistent ML estimators in this CV question.
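A minimal numeric illustration (a made-up model, not the figure above): if $X_i \sim \mathcal{N}(\theta^2, 1)$, the likelihood is an even function of $\theta$, so the score equation has two symmetric roots and the MLE is not unique:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(loc=2.0, scale=1.0, size=100)   # model: X_i ~ N(theta**2, 1), true theta**2 = 2

def log_lik(theta):
    # Gaussian log-likelihood (up to an additive constant) with mean theta**2
    return -0.5 * np.sum((x - theta**2) ** 2)

# Solving d log L / d theta = 0 gives theta**2 = mean(x), i.e. two symmetric roots
theta_hat = np.sqrt(x.mean())
print(log_lik(theta_hat) == log_lik(-theta_hat))  # True: both roots maximise equally
```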
|
44,816
|
How is it that an ML estimator might not be unique or consistent?
|
One example arises from rank deficiency. Suppose that you're conducting an OLS regression but your design matrix is not full rank. In this case, there are any number of solutions which obtain the maximum likelihood value. This problem isn't unique to OLS regression, but OLS regression is a simple enough example.
Another case arises in the MLE for binary logistic regression. Suppose that the regression exhibits separation; in this case, the likelihood does not have a well-defined maximum, in the sense that arbitrarily large coefficients monotonically increase the likelihood.
In both cases, common regularization methods like ridge penalties can resolve the problem.
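A sketch of the rank-deficiency case (simulated data): with a duplicated column in the design matrix, distinct coefficient vectors give identical fitted values, hence identical likelihood:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=n)
X = np.column_stack([x, x])        # rank-deficient design: second column duplicates the first
y = 3 * x + rng.normal(size=n)

b1 = np.array([3.0, 0.0])          # two different coefficient vectors ...
b2 = np.array([0.0, 3.0])
print(np.allclose(X @ b1, X @ b2)) # True: identical fits, so the maximiser is not unique
```

In fact any $(b, 3 - b)$ fits equally well here, which is exactly the continuum of solutions a ridge penalty collapses to a single point.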
|
44,817
|
How is it that an ML estimator might not be unique or consistent?
|
One additional example of non-uniqueness of MLE estimator:
To estimate the location parameter $\mu$ of the Laplace distribution through ML, you need a value $\hat{\mu}$ such that
$$ \sum_{i=1}^n \frac{|x_i - \hat{\mu}|}{x_i - \hat{\mu}} = \sum_{i=1}^n \mathrm{sgn}\left(x_i - \hat{\mu}\right) = 0,$$
so $\hat{\mu}$ must be below (or above) exactly half of the $x$'s, which means $\hat{\mu}$ is a median of them.
Even though when $n$ is even we usually take the mean of the two central observations (in ascending order) as the median, any value between those two observations maximises the likelihood, so the MLE isn't unique. This may be a problem for numerical algorithms, which can yield inconsistent results or even fail to converge.
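For even $n$, the flat stretch of the likelihood is easy to verify numerically (toy data chosen for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # n = 4; the two central observations are 2 and 3

def neg_log_lik(mu):
    # the Laplace log-likelihood in mu is, up to constants, -sum |x_i - mu|
    return np.sum(np.abs(x - mu))

# Every mu between 2 and 3 attains the same minimal value, so the MLE is not unique
print(neg_log_lik(2.0), neg_log_lik(2.5), neg_log_lik(3.0))  # 4.0 4.0 4.0
```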
|
44,818
|
How is it that an ML estimator might not be unique or consistent?
|
Another simple example that shows that the ML Estimator is not always unique is the model $U(\theta, \theta +1)^n$.
If your sample is $(x_1, ..., x_n)$, the likelihood $f(x_1,...,x_n|\theta)$ for this sample is $1$ if $x_i \in [\theta, \theta +1]\ \forall i=1...n$ and $0$ otherwise, so any $\hat\theta$ in the interval $[\max_i x_i - 1, \min_i x_i]$ attains the maximum.
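A quick numerical check with a toy sample (values chosen arbitrarily): every $\hat\theta$ between $\max_i x_i - 1$ and $\min_i x_i$ attains the maximal likelihood of $1$:

```python
import numpy as np

x = np.array([0.5, 0.9, 1.2])   # hypothetical sample from U(theta, theta + 1)

def likelihood(theta):
    # each observation contributes density 1 on [theta, theta + 1] and 0 elsewhere
    inside = (x >= theta) & (x <= theta + 1)
    return 1.0 if inside.all() else 0.0

# Every theta in [max(x) - 1, min(x)] = [0.2, 0.5] is a maximum likelihood estimate
print(likelihood(0.2), likelihood(0.35), likelihood(0.5))  # 1.0 1.0 1.0
print(likelihood(0.6))                                     # 0.0: x = 0.5 falls outside
```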
|
44,819
|
What is the state of the art in statistics tests for distinguishing good from bad random number generators?
|
In addition to the Dieharder suite that Stephan Kolassa mentioned, other well known test suites include TestU01 and the NIST Statistical Test Suite (STS).
The PractRand library you mentioned rates Dieharder and STS as "bad" and TestU01 as "good". But, unlike the other test suites, PractRand is not as well known, and there do not seem to be any academic papers or external review. So, one would have to use their own judgement in trusting these comparisons (there's a little bit of information here on the PractRand webpage).
I'd recommend having a look at crypto.stackexchange.com. For example, some relevant threads here and here.
An important thing to note is that scientific and cryptographic applications have different requirements for pseudorandom number generators. Statistical randomness is necessary for both. But, it's not sufficient for cryptographic applications, which also need resistance to attacks that try to exploit the internal workings of the random number generator. This cannot be verified by statistical tests, and requires cryptanalysis.
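To give a flavour of what a single statistical randomness test looks like (this is only a minimal version of the "monobit" frequency test found in suites like STS, not a substitute for a full suite):

```python
import math
import numpy as np

def monobit_test(bits):
    # Frequency test: under the null of fair bits, the normalised sum of
    # the +/-1 mapped values is approximately standard normal.
    s = np.sum(2 * np.asarray(bits) - 1)       # map {0, 1} -> {-1, +1} and sum
    s_obs = abs(s) / math.sqrt(len(bits))
    return math.erfc(s_obs / math.sqrt(2))     # two-sided p-value

rng = np.random.default_rng(0)
good = rng.integers(0, 2, 100_000)                  # unbiased bits
bad = (rng.random(100_000) < 0.52).astype(int)      # slightly biased bits
print(monobit_test(good), monobit_test(bad))        # the biased stream fails badly
```

Real suites run hundreds of such tests on far longer streams; a generator must pass essentially all of them to be considered statistically sound.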
References
L'Ecuyer et al. (2007). TestU01: A C library for empirical testing of random number generators.
Bassham et al. (2010). A Statistical Test Suite for Random and Pseudorandom Number Generators for Cryptographic Applications.
|
44,820
|
What is the state of the art in statistics tests for distinguishing good from bad random number generators?
|
In 1995, the Diehard suite of tests was distributed. This is no longer state of the art - one limitation is that Diehard only uses about 10 million random numbers in each test, but modern uses of random numbers may consume many more, so tests should base their conclusions on larger samples.
A successor to the Diehard suite is the Dieharder suite. I believe this is state of the art, but (disclaimer) I am not an expert in random number testing, so an answer from anyone who actually is an expert and could actually back their reply up with literature would be much appreciated.
|
44,821
|
What is the intuition behind getting a slope distribution in linear regression?
|
Consider the difference between a population and a sample taken from that population.
You are correct that standard linear regression provides a unique best fitting line for the given data: for this one sample from a population of cases.
We are generally, however, interested in the characteristics of the population, not just of the sample. The reported distribution of coefficient values represents how those values might change over repeated sampling from the same population.
And, yes, the residuals have much to do with one way to estimate the distribution of coefficients, as explained for example here, based on certain standard assumptions. Resampling provides another way to estimate that distribution without making those assumptions.
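The repeated-sampling idea can be made concrete with a small simulation (the population parameters below are arbitrary): draw many samples from the same population, refit the line each time, and look at the spread of the fitted slopes:

```python
import numpy as np

rng = np.random.default_rng(42)
true_slope, n_obs, n_samples = 2.0, 50, 2000

slopes = np.empty(n_samples)
for i in range(n_samples):
    x = rng.normal(size=n_obs)                     # a fresh sample from the population
    y = true_slope * x + rng.normal(size=n_obs)    # noise with standard deviation 1
    slopes[i] = np.polyfit(x, y, 1)[0]             # fitted slope for this sample

# The slopes scatter around the true value; their spread is the standard error
print(slopes.mean(), slopes.std())
```

Each individual fit is still a unique best line for its own sample; the distribution describes how that unique line varies across samples.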
|
44,822
|
What is the intuition behind getting a slope distribution in linear regression?
|
The true parameter/regression coefficients
Linear regression assumes the model:
$$y_i = \mathbf{x}_i^\top \boldsymbol{\beta} + \epsilon_i$$
where $\boldsymbol\beta$ is assumed fixed and only the residual term $\epsilon_i$ is assumed to be distributed according to some distribution.
So the true parameter/coefficient is assumed fixed and is not itself given a distribution (that is the assumption in linear regression; one could think of alternative models that do place distributions on the coefficients).
The estimated parameter/regression coefficients
While the true $\boldsymbol{\beta}$ may be fixed, the estimated $\boldsymbol{\hat\beta}$ may be considered to follow some distribution (the estimate depends on a sample/data that varies for every new experiment, thus the estimate can be considered a random variable). This leads to two different ways to express the estimation of the parameter, point estimates and interval estimates, and in this difference you may find the intuition for reporting additional estimates such as the standard error, t-value, and confidence interval:
From https://en.wikipedia.org/wiki/Point_estimation
In statistics, point estimation involves the use of sample data to
calculate a single value (known as a point estimate or statistic)
which is to serve as a "best guess" or "best estimate" of an unknown
population parameter (for example, the population mean). More
formally, it is the application of a point estimator to the data to
obtain a point estimate.
From https://en.wikipedia.org/wiki/Interval_estimation
In statistics, interval estimation is the use of sample data to
calculate an interval of plausible values of an unknown population
parameter; this is in contrast to point estimation, which gives a
single value. Jerzy Neyman (1937) identified interval estimation
("estimation by interval") as distinct from point estimation
("estimation by unique estimate"). In doing so, he recognized that
then-recent work quoting results in the form of an estimate
plus-or-minus a standard deviation indicated that interval estimation
was actually the problem statisticians really had in mind.
The interval estimate gives a somewhat better idea of what information the data carries. It is not only an estimate for a single population parameter, but it also conveys something like the strength of the information that the data carries, i.e. which values other than the single estimate $\boldsymbol{\hat\beta}$ could still be reasonable alternatives for the unknown parameter $\boldsymbol{\beta}$.
More data, or data with less noise, leads to a smaller sampling variance of the estimate $\boldsymbol{\hat\beta}$ (and this variance can be estimated from the data), which means that not all point estimates carry the same weight. With more data or smaller noise levels the estimate is more likely 'close' to the true unknown parameter. A single point estimate alone does not convey this variance, nor how 'close' the point estimate likely is.
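A sketch contrasting the two kinds of estimates for a slope, using simulated data and the standard OLS formulas:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)   # true intercept 1, true slope 2

X = np.column_stack([np.ones(n), x])
beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]            # point estimate
resid = y - X @ beta_hat
sigma2 = resid @ resid / (n - 2)                           # residual variance estimate
se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X).diagonal())   # standard errors

t_crit = stats.t.ppf(0.975, df=n - 2)                      # 95% interval for the slope
lo, hi = beta_hat[1] - t_crit * se[1], beta_hat[1] + t_crit * se[1]
print(f"point estimate: {beta_hat[1]:.2f}, interval estimate: ({lo:.2f}, {hi:.2f})")
```

Quadrupling $n$ would roughly halve the standard error and hence the interval width, illustrating how the interval conveys the strength of the information while the point estimate alone does not.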
|
What is the intuition behind getting a slope distribution in linear regression?
|
The true parameter/regression coefficients
Linear regression assumes the model:
$$y_i = \boldsymbol{\beta} \mathbf{x_i} +\epsilon_i$$
where $\boldsymbol\beta$ is assumed fixed and only the residual te
|
What is the intuition behind getting a slope distribution in linear regression?
The true parameter/regression coefficients
Linear regression assumes the model:
$$y_i = \boldsymbol{\beta} \mathbf{x_i} +\epsilon_i$$
where $\boldsymbol\beta$ is assumed fixed and only the residual term $\epsilon_i$ is assumed to be distributed according to some distribution.
So the true parameter/coefficient is assumed fixed, and is not assumed to be related to a distribution (That is in linear regression, one could think of alternative models that do express distributions for the coefficients)
The estimated parameter/regression coefficients
While the true $\boldsymbol{\beta}$ may be fixed, the estimated $\boldsymbol{\hat\beta}$ may be considered to follow some distribution (the estimate depends on a sample/data that varies for every new experiment, thus the estimate can be considered a random variable). This leads to two different way to express the estimation of the parameter, point estimates and interval estimates, and in this difference you may find the intuition for reporting additional estimates as standard error, t-value, confidence interval:
From https://en.wikipedia.org/wiki/Point_estimation
In statistics, point estimation involves the use of sample data to
calculate a single value (known as a point estimate or statistic)
which is to serve as a "best guess" or "best estimate" of an unknown
population parameter (for example, the population mean). More
formally, it is the application of a point estimator to the data to
obtain a point estimate.
From https://en.wikipedia.org/wiki/Interval_estimation
In statistics, interval estimation is the use of sample data to
calculate an interval of plausible values of an unknown population
parameter; this is in contrast to point estimation, which gives a
single value. Jerzy Neyman (1937) identified interval estimation
("estimation by interval") as distinct from point estimation
("estimation by unique estimate"). In doing so, he recognized that
then-recent work quoting results in the form of an estimate
plus-or-minus a standard deviation indicated that interval estimation
was actually the problem statisticians really had in mind.
The interval estimate gives a bit better idea about what information the data carries. It is not only an estimate for a single population parameter, but it also conveys something like the strength of the information that the data carries, ie how far other values than this single estimate, $\boldsymbol{\hat\beta}$ , could still be reasonable alternatives for the unknown parameter $\boldsymbol{\beta}$.
More data, or data with less noise, leads to a smaller deviance of the estimate $\boldsymbol{\hat\beta}$ (and this deviance can be estimated from the data), which means that not every point estimate can be considered the same. With more data or smaller noise levels the estimate is more likely 'close' to the true unknown parameter. Just a single point estimate does not convey this deviance and how 'close' the point estimate likely is.
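To see this deviance concretely, here is a small NumPy simulation (the true model $y = 2 + 0.5x + \varepsilon$ and all numbers are illustrative, not from the discussion above): the slope estimate varies from sample to sample, and its spread shrinks as the sample size grows.

```python
# A small simulation of the sampling distribution of the OLS slope estimate.
# The true model y = 2 + 0.5*x + N(0,1) noise is illustrative only.
import numpy as np

rng = np.random.default_rng(0)

def slope_estimates(n, n_reps=2000):
    """Fit OLS on n_reps fresh samples of size n and return the slopes."""
    slopes = np.empty(n_reps)
    for i in range(n_reps):
        x = rng.uniform(0, 10, size=n)
        y = 2 + 0.5 * x + rng.normal(size=n)
        slopes[i] = np.polyfit(x, y, 1)[0]   # leading coefficient = slope
    return slopes

small, large = slope_estimates(n=20), slope_estimates(n=200)
print(small.mean(), small.std())   # centred near the true slope 0.5
print(large.mean(), large.std())   # same centre, visibly smaller spread
```

Every re-run of the "experiment" gives a different point estimate; the standard error reported next to $\hat\beta$ is precisely an estimate of this spread.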
|
44,823
|
A huge gap between training and validation accuracy, confusion with the concept of Overfitting
|
Sounds like you are severely overfitting. Basically, you need to use a simpler model than the one you are currently using or collect (a lot) more data. Generally, the more data you have, the more complex a model you can fit without overfitting.
I do not think you are going to get meaningful results using a CNN on such a small dataset. Start with a simple decision tree with 1 to 3 levels to establish a benchmark. Maybe try linear models with high regularization. You are looking for poor performance (but better than random) on the training set and similar performance on the validation set. Then you can start trying more complex models that fit the training set better and maybe generalize to the validation set a bit better, too.
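To make the gap concrete, here is a minimal NumPy sketch (synthetic data, not the questioner's CNN setup): a 1-nearest-neighbour "model" memorises pure-noise labels, so training accuracy is perfect while validation accuracy stays near chance — severe overfitting by construction.

```python
# Minimal illustration of a huge train/validation gap: a 1-nearest-neighbour
# model memorising labels that carry no signal at all. NumPy only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 60, 5
X_train = rng.normal(size=(n, d))
y_train = rng.integers(0, 2, size=n)     # pure-noise labels
X_val = rng.normal(size=(n, d))
y_val = rng.integers(0, 2, size=n)

def predict_1nn(X_ref, y_ref, X):
    """Return the label of the closest reference point for each row of X."""
    d2 = ((X[:, None, :] - X_ref[None, :, :]) ** 2).sum(axis=2)
    return y_ref[d2.argmin(axis=1)]

train_acc = (predict_1nn(X_train, y_train, X_train) == y_train).mean()
val_acc = (predict_1nn(X_train, y_train, X_val) == y_val).mean()
print(train_acc, val_acc)   # perfect on training data, ~chance on validation
```

Each training point is its own nearest neighbour, so training accuracy is 1.0 by construction — a simpler (more regularized) model would score worse on training but similarly on validation, which is what you want to see first.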
|
44,824
|
How does eigenvalues measure variance along the principal components in PCA? [duplicate]
|
We start from data covariance matrix
$$ S = \mathbb E(XX^{T})- \mathbb E(X) \mathbb E(X)^{T}$$
Say $\mu$ is a column vector of the same dimension as $X$ and $\mu^{T}\mu = 1$, then
$$\mu^{T}S\mu=\mu^{T}(\mathbb E(XX^{T})- \mathbb E(X) \mathbb E(X)^{T}) \mu = \mathbb E((\mu^{T} X)(\mu^{T} X)^{T})-\mathbb E(\mu^{T} X) \mathbb E(\mu^{T} X)^{T}$$
Note that $\mu^{T}X$ is the projection of $X$ onto $\mu$, such that
$$\mathbb E((\mu^{T} X)(\mu^{T} X)^{T})-\mathbb E(\mu^{T} X) \mathbb E(\mu^{T} X)^{T} = Var(\mu^{T} X)$$
Since $Var(\mu^{T} X)$ is simply a number, we denote it as $\lambda$, so we have $$\mu^{T}S\mu = \lambda$$
Since $\mu^{T}\mu = 1$, multiplying both sides on the left by $\mu$ gives
$$\mu \mu^{T} S\mu = S\mu= \lambda \mu$$
which means the $\mu$ that we define in the first place is actually an eigenvector of the data covariance matrix, and the eigenvalue of which is the variance that the data has in that direction.
I cannot think of an intuitive way to make sense of this, but once you are familiar with the math, I think you will accept this.
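A quick numerical check of the result (synthetic data, NumPy only): the variance of the data projected onto each eigenvector of the covariance matrix equals the corresponding eigenvalue.

```python
# Check: variance of the projection onto an eigenvector = its eigenvalue.
import numpy as np

rng = np.random.default_rng(1)
# synthetic correlated data: 1000 samples, 3 variables
X = rng.normal(size=(1000, 3)) @ np.array([[2.0, 0.0, 0.0],
                                           [1.0, 1.0, 0.0],
                                           [0.0, 0.0, 0.5]])

S = np.cov(X, rowvar=False)              # sample covariance matrix
eigvals, eigvecs = np.linalg.eigh(S)     # eigh, since S is symmetric

# variance of the data projected onto each eigenvector (columns of eigvecs)
proj_vars = np.var(X @ eigvecs, axis=0, ddof=1)
print(proj_vars)
print(eigvals)                           # same numbers: eigenvalue = variance
```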
|
44,825
|
How does eigenvalues measure variance along the principal components in PCA? [duplicate]
|
Variance:
Variance is the square of the deviation from zero, so the total variance of a vector is the sum of its squared values.
https://en.wikipedia.org/wiki/Variance
Eigenvectors and Eigenvalues:
Eigenvectors are basis vectors that capture the inherent patterns that make up a dataset. By convention these are unit vectors, with norm = 1 so that they are the linear algebra equivalent of 1. Eigenvectors come in pairs, 'left' and 'right'. http://mathworld.wolfram.com/Eigenvalue.html
Convention (based on the preferred orientation of the data matrix as a column matrix) is that the right eigenvector (I'll use the notation I'm familiar with, $L^T$, which is used in various applied fields when talking about PCA) contains the basis functions and is what is variously known as principal components, loadings, or latent factors, amongst many other names. The right eigenvector can be projected onto any dataset with the same variables, so is useful for model building.
The left eigenvector ($S$) is the weights that each sample takes for each right eigenvector, and as such can be considered a series of functions for linear transformation of your original matrix. In practice this means that $S$ is the weightings that are given to each sample in order to construct the right eigenvector, which means that by definition they represent the total amplitude that each right eigenvector explains in that particular sample.
$$L^T=S^†D$$
Summing the individual variances explained per sample gives the total variance explained in the dataset.
Eigenvalues are a set of scalars such that scaling the linear transformation $S$ with the right eigenvectors produces the same result as multiplying the right eigenvectors themselves by that value
$$SL^T = \lambda L^T$$
Since linear algebra multiplication involves summation of the products of the row and column entries in the two multiplicands, multiplication by a scalar that is the total variance of the linear transform gives the same result. This means that the eigenvalues are, by definition, the variance of the transform.
Eigendecomposition and PCA:
Thinking about it a bit more I realise that while NIPALS (see original answer below) is more intuitive for understanding how PCA is calculated, the SVD method is more intuitive for understanding eigenvalues themselves.
In SVD the data is decomposed into two sets of unit vector matrices with a diagonal scaling matrix in between.
$$D = U*sv*L^T$$
the scores matrix $S$ in PCA is $U*sv$ and $L$ is the eigenvectors
The singular values are related to eigenvalues as:
$sv = \sqrt{ev}$
Note this relationship between the two is subject to constraints as described in https://math.stackexchange.com/questions/127500/what-is-the-difference-between-singular-value-and-eigenvalue
Original Answer using NIPALS:
For me the best algorithm for understanding PCA in an intuitive way is NIPALS
https://folk.uio.no/henninri/pca_module/pca_nipals.pdf
With the NIPALS approach the following steps are taken
Inner product of data $D$ to get the covariance matrix (correlation if scaled appropriately) whose diagonal is the sum of squares
$D^TD$
Project an initial vector of weights $W$ onto the data (various sources recommend using random samples to do this, I prefer the unit vector of the square root of the sum of squares). I will refer to the eigenvectors as $L$
$$L^T_0=W^†D$$
This gives an initial guess at the principal component, which is then projected onto the data to reconstruct it based on the initial guess.
$$D_{recon} = WL^T_0$$
The residual is calculated $D-D_{recon}$ and its sum of squares is calculated. OLS is then performed until the sum of squares reaches a predefined stopping criterion, each time calculating the unit vector arising from projecting the updated weightings onto the data. This unit vector is the iterated principal component or eigenvector.
In NIPALS we subtract the final reconstructed data from the original data then use the residual to calculate PC2, and always proceed to the next PC with the final residual after iterating. This means all variance accounted for by $PC_i$ is removed from consideration by $PC_{>i}$
Initially the product of $WD$ is a vector with a norm that is not equal to 1, so we calculate the norm of the vector and use it to scale the vector to a unit vector.
The reason for creating unit vectors is that they are numerically more stable than unconstrained vectors and have the nice property of behaving the same in linear algebra multiplication and inverse matrix operations (basically they are the linear algebra equivalent of the number 1).
How are eigenvalues and variance the same for PCA?
So what is this norm that was used to scale the eigenvector? It is the square root of the sum of squares of the coefficients in the vector, i.e. the square root of the variance. The eigenvalue is the square of this value, i.e. it is the sum of squares = total variance.
If the characteristic vectors (the eigenvectors) are not unit vectors then the eigenvalues would not be their variance, but since we define eigenvectors as unit vectors then it falls out naturally that they are the variance of that vector in the data. If we calculate the scores by projecting the eigenvectors onto the data (note the formula below only works because L is comprised of unit vectors)
$$S = LD$$
Then the scores, since they have been multiplied by unit vectors, take on the total variance that is captured within the data by each unit vector. By definition eigenvalues are scalars that map the characteristic vectors of a matrix onto the matrix, i.e. they scale the vectors in order to reconstruct the original data based on the characteristic vectors.
What is the intuition behind this being the same?
What is the mathematical proof behind this theory?
They are this way by definition as described above
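As a sketch of the NIPALS idea above (illustrative data with deliberately unequal column variances so the iteration converges quickly; not a production implementation), here is a minimal iteration for the first eigenvector, checked against the SVD route:

```python
# Minimal NIPALS-style iteration for the first principal component,
# compared against SVD. NumPy only; all data is synthetic.
import numpy as np

rng = np.random.default_rng(2)
D = rng.normal(size=(200, 4)) * np.array([3.0, 1.5, 1.0, 0.5])
D = D - D.mean(axis=0)                     # centre the data first

w = np.ones(D.shape[1]) / np.sqrt(D.shape[1])   # initial unit weight vector
for _ in range(200):
    s = D @ w                              # scores for the current loading
    w_new = D.T @ s                        # project the scores back onto the data
    w_new /= np.linalg.norm(w_new)         # rescale to a unit vector
    if np.allclose(w_new, w):
        break
    w = w_new

# SVD comparison: the squared singular value is the eigenvalue (total variance)
U, sv, Vt = np.linalg.svd(D, full_matrices=False)
print(abs(w @ Vt[0]))                      # ~1: same direction, up to sign
print(np.var(D @ w) * len(D), sv[0] ** 2)  # captured sum of squares vs sv^2
```

The iterated unit vector matches the first right singular vector, and the sum of squares it captures equals $sv^2$, i.e. $sv = \sqrt{ev}$ as stated above.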
|
44,826
|
Regression when output is in a specific interval
|
The appropriate technique depends on your goal.
If you are building a model for inference, you should focus on the properties of the distribution of your target conditional on covariates, $p(y|x)$.
For example, the value $0.5(y+1)$ may be distributed as $Beta(\alpha(x), \beta(x))$. In this case, you may perform maximum likelihood estimation of parameters of the functions $\alpha(x)$ and $\beta(x)$, and find the best form for them (e.g. linear or log-linear). Google "beta regression" for more details.
Instead of $Beta$, you can fit a GLM with any link function you want (indeed, logit link is commonly used). You can also map $y$ into $(-\infty, \infty)$ with any function you want, and use unconstrained regression. The last approach, however, can fail if exact $\pm 1$s are present in your data.
Another trick is to transform your regression into weighted classification. From each training observation $(x, y)$ you can generate two observations $(x, 1)$ and $(x, 0)$ with corresponding weights $\frac{1+y}{2}$ and $\frac{1-y}{2}$, fit a probabilistic classifier (e.g. logistic or probit regression), and then transform predicted probability of $1$ back to $y$.
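A minimal sketch of this weighted-classification trick (synthetic data; the weighted logistic regression is hand-rolled with gradient descent purely to keep the example self-contained, and all coefficients are illustrative):

```python
# Weighted-classification trick: each (x, y) with y in (-1, 1) becomes a
# positive case with weight (1+y)/2 and a negative case with weight (1-y)/2.
import numpy as np

rng = np.random.default_rng(3)
x = rng.uniform(-2, 2, size=300)
y = np.tanh(1.25 * x + rng.normal(scale=0.3, size=x.size))  # targets in (-1, 1)

# duplicate every observation: once as class 1, once as class 0, with weights
X = np.column_stack([np.ones_like(x), x])
Xw = np.vstack([X, X])
t = np.concatenate([np.ones(x.size), np.zeros(x.size)])
w = np.concatenate([(1 + y) / 2, (1 - y) / 2])

beta = np.zeros(2)
for _ in range(5000):                     # weighted logistic regression
    p = 1 / (1 + np.exp(-Xw @ beta))
    beta -= 0.5 * Xw.T @ (w * (p - t)) / w.sum()

y_hat = 2 / (1 + np.exp(-X @ beta)) - 1   # predicted probability mapped back
print(beta)                # slope near 2.5, since tanh(u) = 2*sigmoid(2u) - 1
print(y_hat.min(), y_hat.max())           # predictions stay inside (-1, 1)
```

In practice you would use any off-the-shelf classifier that accepts sample weights; the point is only that the back-transformed predictions are automatically confined to $(-1, 1)$.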
If you are building a model for prediction, the probabilistic properties may be ignored; you just focus on predicting $y$ as closely as possible, whatever that means. In this case, you may fit any function $y=f(x)$, and just truncate predictions outside $[-1, 1]$. This approach allows you to try lots of different regression algorithms without bothering much about the boundaries on $y$.
Moreover, several machine learning models (e.g. decision trees and their ensembles such as random forests, k-nearest-neighbours, or any other method whose prediction is a weighted average of training samples) are by design unable to predict higher than the highest training value, or lower than the lowest. If you use them, you may never worry about the interval of $y$.
What approach is standard, depends on the domain and on your goal. But fitting a logistic function to continuous data seems to be OK:
it always predicts in $(-1, 1)$
it works even with exact $\pm 1$
generalized linear form gives you a basis for inference and feature selection
it has had decent prediction accuracy in most cases I have seen.
Now it's time for an implementation. There is an example of R code that evaluates such a model.
set.seed(1)
data = data.frame(x=1:100)
data$y = 1 / (1 + exp(5-0.1*(data$x) + rnorm(100)))
model = glm(y~x, family = 'binomial', data=data)
summary(model)
plot(data$x, data$y)
lines(data$x, predict(model, data, type = 'response'))
It outputs the following table of estimated coefficients (close to the "true" coefficients I used)
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -4.48814 0.88243 -5.086 3.65e-07 ***
x 0.08713 0.01615 5.394 6.89e-08 ***
and a picture with the training data and the fitted function
Unfortunately, Python's sklearn does not allow logistic regression to run in regression mode, but it is possible with statsmodels - it has a Logit class that allows continuous targets. The interface and output are pretty similar to those in R:
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf
np.random.seed(1)
df = pd.DataFrame({'x': range(100)})
df['y'] = 1 / (1 + np.exp(5-0.1*(df.x) + np.random.normal(size=100)))
model = smf.logit('y~x', data=df).fit()
print(model.params)
plt.scatter(df.x, df.y)
plt.plot(df.x, model.predict(df), color='k')
plt.show()
One more issue worth considering is the evaluation metric for your model. Along with standard RMSE and MAE, in such problems rank-based metrics, such as Spearman correlation, may be useful. If you do weighted classification instead of regression, you can also calculate weighted classification metrics, like ROC AUC.
The rationale for such metrics is that in the end you may want not to predict $y$ as accurately as possible, but separate low $y$ from high $y$ as accurately as possible, but you don't know the threshold in advance, or it is variable. Rank-based metrics reflect this process better than difference-based metrics.
|
44,827
|
Regression when output is in a specific interval
|
The simple linear regression theory is more developed for normal variables than for other distributions. When we have to deal with a problem like yours, we can use change of variables. In your case, I will use a change like:
$$ z = \frac {2y} {1-y^2} = \frac {1} {1-y} - \frac {1} {1+y} $$
This function is increasing: if $y$ is greater, $z$ is greater. When $y$ is near $-1$, $z$ is near $-\infty$; when $y$ is near $+1$, $z$ is near $+\infty$. With this trick, you can calculate the linear relationship between the independent variable $x$ and the dependent variable $z$.
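A quick numerical sketch of this change of variables (synthetic data; the inverse map $y = (\sqrt{1+z^2}-1)/z$ follows from solving $z = 2y/(1-y^2)$ for $y$, and the coefficients below are illustrative):

```python
# Fit an ordinary linear regression on the transformed variable z, then
# back-transform the fitted values into (-1, 1). NumPy only.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(0, 10, 200)
z_true = -3 + 0.8 * x + rng.normal(size=x.size)
y = (np.sqrt(1 + z_true**2) - 1) / z_true    # observed response, inside (-1, 1)

z = 2 * y / (1 - y**2)                       # forward transform
slope, intercept = np.polyfit(x, z, 1)       # ordinary linear regression on z
z_fit = intercept + slope * x
y_fit = (np.sqrt(1 + z_fit**2) - 1) / z_fit  # back-transform the predictions
print(slope, intercept)                      # close to 0.8 and -3
print(y_fit.min(), y_fit.max())              # always strictly inside (-1, 1)
```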
|
44,828
|
How to find the expectation of the maximum of independent exponential variables?
|
The answer referenced in the comments is great, because it is based on straightforward probabilistic thinking. But it is possible to obtain the answer through elementary means, beginning from definitions.
Because $x_{(n)}$ is the largest of $n$ independent variables, the event $x_{(n)}\le x$ is the event that all the $x_i \le x.$ Stipulating the $x_i$ have Exponential$(1)$ distributions says that for $x\gt 0,$ these have common probability $1 - e^{-x}$ (and otherwise have zero probability).
Since probabilities of independent events multiply,
$$\Pr(x_{(n)} \le x) = \left(1 - e^{-x}\right)^n.$$
One well-known formula for the expectation of a positive random variable with distribution function $F$ is the integral of $1-F$ from $0$ to $\infty.$ (Take the usual integral for the expectation and integrate by parts.) We are looking, then, to compute
$$E_n = E\left[x_{(n)}\right] = \int_0^\infty 1 - \left(1 - e^{-x}\right)^n\,\mathrm{d}x$$
for $n=1, 2, 3, \ldots.$
That initial "$1$" in the integrand is thorny, because its integral diverges, so we cannot separate it out. However, the differences between these quantities are considerably simpler to compute because the $1$'s cancel:
$$E_{n} - E_{n-1} = \int_0^\infty 1 - \left(1 - e^{-x}\right)^n - \left[1 - \left(1 - e^{-x}\right)^{n-1}\right]\,\mathrm{d}x = \int_0^\infty \left(1 - e^{-x}\right)^{n-1}e^{-x}\,\mathrm{d}x.$$
This is a textbook case for integration by substitution: the natural form to try is $u = 1-e^{-x},$ reducing the integral to
$$E_{n} - E_{n-1} = -\int_1^0 u^{n-1}\,\mathrm{d}u = \frac{1}{n}.$$
Beginning with $E_0=\int (1-1)\mathrm{d}x = 0,$ we obtain recursively
$$\begin{aligned}E_n &= E_0 + (E_1 - E_0) + (E_2 - E_1) + \cdots + (E_n - E_{n-1}) \\&= 0 + \frac{1}{1} + \frac{1}{2} + \cdots + \frac{1}{n} = H(n),
\end{aligned}$$
the $n^\text{th}$ harmonic number.
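A quick Monte Carlo check of this result (NumPy; the sample sizes are illustrative):

```python
# Verify numerically that E[max of n iid Exponential(1)] = H(n),
# the n-th harmonic number.
import numpy as np

rng = np.random.default_rng(5)

def mc_expected_max(n, n_reps=200_000):
    """Average the maximum of n iid Exponential(1) draws over n_reps trials."""
    return rng.exponential(size=(n_reps, n)).max(axis=1).mean()

for n in (1, 2, 5):
    harmonic = sum(1 / k for k in range(1, n + 1))
    print(n, round(mc_expected_max(n), 3), round(harmonic, 3))
```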
|
How to find the expectation of the maximum of independent exponential variables?
Method of Moments approach
Given a set of $n$ exponentially distributed i.i.d. variables $X_i \sim EXP(1)$, the expected value of an order statistic $X_{i:n}$ is found in a straightforward fashion with the method of moments, which gives the expected value as,
\begin{equation*}
\begin{aligned}[b]
E[X] = \left[\frac{\partial}{\partial t}\int e^{xt}f(x)\,\mathrm{d}x \right]_{t=0}= \int xf(x)\,\mathrm{d}x
\end{aligned}
\end{equation*}
Now, for the sake of rigor and clarity, consider the full pdf of the order statistic for a general integer $i$, $1<i<n$:
\begin{equation*}
\begin{aligned}[b]
f(x_i) &= \frac{n!}{(i-1)!(n-i)!}[F(x_i)]^{i-1}[1-F(x_i)]^{n-i}f(x_i) \\
&=\frac{n!}{(i-1)!(n-i)!}[1-e^{-x_i}]^{i-1}e^{-(n-i+1)x_i}
\end{aligned}
\end{equation*}
Applying the method of moments gives,
\begin{equation*}
\begin{aligned}[b]
E[X] &= \frac{n!}{(i-1)!(n-i)!}\left[\frac{\partial}{\partial t}\int [1-e^{-x_i}]^{i-1}e^{-(n-i+1-t)x_i}\,\mathrm{d}x_i\right]_{t=0}\\
& = \frac{n!}{(i-1)!(n-i)!}\left[ \frac{(i-1)!(n-i)!}{n!} \sum_{k=1}^i \frac{1}{n-k-t+1} \right]_{t=0}\\
& = \sum_{k=1}^i \frac{1}{n-k+1}
\end{aligned}
\end{equation*}
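The result $E[X_{i:n}] = \sum_{k=1}^i \frac{1}{n-k+1}$ (note the index $k$ in the denominator) can be checked against simulated order statistics. A small sketch (Python with NumPy; the values of $n$ and $i$ are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n, i, reps = 6, 3, 300_000

# i-th smallest of n iid Exponential(1) draws, averaged over replications
xs = np.sort(rng.exponential(1.0, size=(reps, n)), axis=1)
emp = xs[:, i - 1].mean()

pred = sum(1.0 / (n - k + 1) for k in range(1, i + 1))  # 1/6 + 1/5 + 1/4
print(emp, pred)  # both about 0.617
```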
If I recall correctly, in my messing around I found this method works well for the variance too, but I ran into difficulties attempting to make it work for the covariance, e.g. $\frac{\partial^2}{\partial t_1\partial t_2}e^{x_{i:n}t_1 + x_{j:n}t_2}f(x_{i:n},x_{j:n})$, but to no avail. There might be some esoteric covariance formula for order statistics which makes it work, though.
Substitution approach
Following a paper entitled ORDER STATISTICS OF UNIFORM, LOGISTIC AND EXPONENTIAL DISTRIBUTIONS by Okoyo Collins and Omondi (see page 100-102) an alternative is to make a clever substitution;
\begin{equation*}
\begin{aligned}[b]
Z_i = (n-i+1)(X_i - X_{i-1}) \qquad \longrightarrow \qquad X_i = \frac{Z_i}{n-i+1} + X_{i-1}
\end{aligned}
\end{equation*}
which can be used to show that,
\begin{equation*}
\begin{aligned}[b]
X_i \sim \frac{Z_1}{n} + \frac{Z_2}{n-1} +...+ \frac{Z_i}{n-i+1}
\end{aligned}
\end{equation*}
The Jacobian of the transformation turns out to be $n!$ (see pg 101 of referenced paper). Also, we conveniently have,
\begin{equation*}
\begin{aligned}[b]
\sum_{i=1}^n x_i = \sum_{i=1}^n z_i
\end{aligned}
\end{equation*}
(Convince yourself of this). The joint pdf then transforms as,
\begin{equation*}
\begin{aligned}[b]
f_{X_1,X_2,...,X_n} = n!e^{-\sum_{i=1}^n x_i } \quad \longrightarrow \quad e^{-\sum_{i=1}^n z_i }
\end{aligned}
\end{equation*}
Writing the subscripts out in full ordered notation, we now have,
\begin{equation*}
\begin{aligned}[b]
E[X_{i:n}] &= E\left[\frac{Z_{1:n}}{n} + \frac{Z_{2:n}}{n-1} +...+ \frac{Z_{i:n}}{n-i+1}\right] \\
&= \sum_{k=1}^i \frac{1}{n-k+1}
\end{aligned}
\end{equation*}
Because $Z_{i:n} \sim EXP(1)$ as well (?). This warrants additional justification, but I've taken it as far as needed for my purposes which was applied to a different problem.
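The claim that the normalized spacings $Z_i = (n-i+1)(X_{(i)} - X_{(i-1)})$ are themselves standard exponential can at least be checked numerically. A sketch (Python with NumPy; parameter values are arbitrary, and matching means is of course only a partial check of the full distributional claim):

```python
import numpy as np

rng = np.random.default_rng(2)
n, reps = 5, 200_000

xs = np.sort(rng.exponential(1.0, size=(reps, n)), axis=1)
prev = np.hstack([np.zeros((reps, 1)), xs[:, :-1]])   # X_(0) = 0

# Z_i = (n - i + 1) * (X_(i) - X_(i-1)) for i = 1..n
z = (n - np.arange(n)) * (xs - prev)
max_dev = float(np.abs(z.mean(axis=0) - 1.0).max())
print(z.mean(axis=0))  # each entry close to 1, consistent with Z_i ~ Exp(1)
```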
A different approach is to view the order statistic as a sum statistic. An explanation is given here: https://math.stackexchange.com/a/4283180
If $X_k \sim Exp(1)$ then
$$\max(X_1, X_2, \dots , X_n) \qquad \sim \qquad \sum_{k=1}^n Y_k$$
with $Y_k \sim Exp(n+1-k)$.
And then you can compute the expectation value as
$$E\left[\max(X_1, X_2, \dots , X_n)\right] = E\left[\sum_{k=1}^n Y_k\right] = \sum_{k=1}^n E\left[Y_k\right] = \sum_{k=1}^n \frac{1}{n+1-k} = \sum_{k=1}^n \frac{1}{k} $$
In general you get for the $m$-th order statistic (of $n$ exponentially distributed variables) the expectation:
$$E[X_{(m)}] = \sum_{k=1}^m \frac{1}{n+1-k} $$
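The sum representation pins down the variance as well as the mean: independence of the $Y_k$ gives $\operatorname{Var}[\max] = \sum_{k=1}^n 1/(n+1-k)^2$. A quick check of both moments (Python with NumPy; $n$ and the replication count are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 5, 400_000

m = rng.exponential(1.0, size=(reps, n)).max(axis=1)

# predicted moments from max ~ sum of independent Y_k, Y_k ~ Exp(n + 1 - k)
mean_pred = sum(1.0 / k for k in range(1, n + 1))
var_pred = sum(1.0 / k**2 for k in range(1, n + 1))
print(m.mean(), mean_pred)  # both about 2.28
print(m.var(), var_pred)    # both about 1.46
```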
Why variance of OLS estimate decreases as sample size increases?
If we assume that $\sigma^2$ is known, the variance of the OLS estimator only depends on $X'X$ because we do not need to estimate $\sigma^2$. Here is a purely algebraic proof that the variance of the estimator decreases with any additional observation if $\sigma^2$ is known. Suppose $X$ is your current design matrix and you add one more observation $x$, which has dimension $1\times (p+1)$. Your new design matrix is $$X_{new} = \left(\begin{array}{c}X \\ x \end{array}\right).$$
You can check that $X_{new}'X_{new} = X'X + x'x$. Using the Woodbury identity we get
$$
(X_{new}'X_{new})^{-1} = (X'X + x'x)^{-1} = (X'X)^{-1} - \frac{(X'X)^{-1}x'x(X'X)^{-1}}{1+x(X'X)^{-1}x'}
$$
Because $(X'X)^{-1}x'x(X'X)^{-1}$ is positive semi-definite (it is a matrix multiplied by its own transpose) and $1+x(X'X)^{-1}x'>0$, the diagonal elements of the subtracted term are greater than or equal to zero. So, the diagonal elements of $(X_{new}'X_{new})^{-1}$ are less than or equal to the diagonal elements of $(X'X)^{-1}$.
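Both the Woodbury (rank-one, Sherman-Morrison) step and the diagonal comparison are easy to verify numerically. A small sketch (Python with NumPy; the dimensions and random design are arbitrary illustrations):

```python
import numpy as np

rng = np.random.default_rng(4)
n, p = 30, 3
X = rng.normal(size=(n, p))
x = rng.normal(size=(1, p))      # the extra observation, as a row vector

A = np.linalg.inv(X.T @ X)
# rank-one update for (X'X + x'x)^{-1}
A_new = A - (A @ x.T @ x @ A) / (1.0 + (x @ A @ x.T).item())

# agrees with direct inversion of the augmented design
direct = np.linalg.inv(np.vstack([X, x]).T @ np.vstack([X, x]))
print(np.allclose(A_new, direct))                          # True
print(bool(np.all(np.diag(A_new) <= np.diag(A) + 1e-12)))  # True
```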
Assumptions:
(1) There exists a population from which infinite draws of $X$ and $y$ may be made, and each of those draws is characterized by the exact same distribution parameters.
(2) $n$ is sufficiently large that the variance of a sample of length $n$ is always the same, or may be approximated as such.
Let's start out like this:
$\hatβ=({X'X})^{-1}X'y$
$\text{Var}(\hatβ)=\text{Var}[({X'X})^{-1}X'y]$
Now, let the columns of $X$ be mutually orthogonal, each with variance $σ^2$ and mean $0$. $X'X$ is then a $(p+1)$-dimensional diagonal matrix whose elements are $nσ^2$. $({X'X})^{-1}$ is just the element-by-element inversion of the diagonals of $X'X$, that is, a $(p+1)$-dimensional diagonal matrix whose elements are $1/{(nσ^2)}$.
That brings us to
$\text{Var}(\hatβ)=[1/{(nσ^2)}]^2I_{p+1}\text{Var}[X'y]$
$\text{Var}(\hatβ)=[1/{(n^2σ^4)}]I_{p+1}\text{Var}[X'y]$
However, if $y$ is just a univariate response with variance $σ^2$ and mean $0$, then there's no need for the identity matrix in specifying its variance; its variance is a scalar. As specified in the first paragraph, each of the columns of $X$ also has variance $σ^2$ and mean $0$, so the variance of $X'y$ is given by a $(p+1)$-by-$1$ column vector whose elements are $nσ^4$, i.e., $nσ^4\,\mathbf{1}_{p+1}$. The presence of the $n$ term seems strange until you realize that we are actually talking about the variance of the sum of $n$ random variables, each with variance $σ^4$ (the product of two random variables, each with variance $σ^2$ and mean $0$). That is,
$\text{Var}(\hatβ)=[1/{(n^2σ^4)}]I_{p+1}\,nσ^4\,\mathbf{1}_{p+1}$
So we have a $(p+1)$-by-$(p+1)$ diagonal matrix multiplying a $(p+1)$-by-$1$ vector, each of whose elements are
$\text{Var}(\hatβ_i)=[1/{(n^2σ^4)}]nσ^4=1/n$
Note the absence of $σ^2$, which is due to our specification that all the vectors have the same variance. The summation of the $p+1$ elements of the variance vector therefore scales linearly with $p+1$, which we also expect. This is essentially the variance of $\hat{y}$, which tends to exhibit proportionality to $(p+1)/n$.
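The $1/n$ scaling is straightforward to observe in simulation. Here is a sketch (Python with NumPy) for a simple univariate regression in which, as above, the predictor and the noise both have variance $1$ and mean $0$, so the slope variance should be about $1/n$; the sample sizes and replication count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)

def slope_var(n, reps=4000):
    """Empirical variance of the OLS slope across simulated datasets."""
    slopes = np.empty(reps)
    for r in range(reps):
        x = rng.normal(0.0, 1.0, n)            # predictor: variance 1, mean 0
        y = 2.0 * x + rng.normal(0.0, 1.0, n)  # noise variance 1
        slopes[r] = np.polyfit(x, y, 1)[0]     # fitted slope
    return slopes.var()

v100, v400 = slope_var(100), slope_var(400)
print(v100, v400)  # roughly 1/100 and 1/400
```

Quadrupling the sample size cuts the slope variance by about a factor of four, as the $1/n$ result predicts.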
Here is a resource I've found useful, and extends this explanation to regularized (ridge) regression.
Mixture models vs Mixed models
Tim gives a good answer describing the conceptual differences between the two model classes. In the interest of completeness, since you asked for a practical example, here is some R code for generating data from a mixture model. More specifically, this is a Gaussian mixture model with two components; to adapt this to Tim's notation so you can see the relationship, we have that:
$$
g(x) = \sum_{k=1}^K \pi_k f_k(x; \vartheta_k)
$$
Where $K=2$ and $f_k(x; \vartheta_k) \sim N(\mu_k,\sigma^2_k)$. That is, $g(x)$ is distributed as a finite mixture of two normal (Gaussian) distributions, each with its own mean and variance; where $\pi$ is a parameter that governs the degree of mixing. Now, let's visualize what this all actually means. Let's start by setting some of these parameters to fixed values.
Let's say that $\pi=0.5$, $\mu_1=1$, $\sigma^2_1=1$, $\mu_2=6$, and $\sigma^2_2=2$. This gives us:
$$
g(x) = 0.5\times N(1,1) + 0.5\times N(6,2)
$$
So you can see that this is just a weighted sum of two normal distributions. Let's see what happens when we generate this data in R:
# Set our sample size
N <- 1000
# Set our values of pi
pi <- sample(1:2,prob=c(0.5,0.5),size=N,replace=TRUE)
# Set the parameters of our two normal distributions
mus <- c(1,6)
sds <- sqrt(c(1,2))
# Note that above I parameterized our normals in terms of their
# variance, but the rnorm function below requires standard
# deviations, thus why I'm taking the square root.
# Generate our data
mixture_model <- rnorm(n=N,mean=mus[pi],sd=sds[pi])
# Histogram
hist(mixture_model)
You can see that we have created a distribution that is bimodal, with each mode corresponding to one of our component means (1 and 6). I will leave it as an exercise to you to see what happens as you change the mixing proportions ($\pi$) or the parameters of each component, and how that impacts the mixture distribution. It is also possible to define mixtures with more than two components (in fact, there is even a literature on "infinite" mixtures, where the number of components is in itself a random variable!) and with distributions other than normal ones (indeed, in general, there is no need even for each component of the mixture to be the same distribution, and I have even seen nested mixture models, where one component of the mixture is in and of itself another mixture model!).
(As a sidebar, the term "mixture model" has occasionally been used to describe a different class of models that are more properly referred to as "compound probability distributions". For example, if we have a Poisson distribution whose rate parameter is assumed to be a random variable following a Gamma distribution, the resulting "Poisson-Gamma mixture," as it is occasionally called, is actually a compound probability distribution that can be shown to follow a negative binomial distribution. There is a relationship here with the notion of prior/posterior distributions in Bayesian models, and with the notion of finite mixture models I described above, but that's beyond the scope of this question.)
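The Poisson-Gamma claim in that sidebar can be illustrated numerically: mixing a Poisson rate over a Gamma$(a, \theta)$ (shape $a$, scale $\theta$) gives negative-binomial overdispersion, with mean $a\theta$ and variance $a\theta(1+\theta)$. A sketch (Python with NumPy; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(6)
a, theta, reps = 3.0, 2.0, 500_000

lam = rng.gamma(shape=a, scale=theta, size=reps)  # random Poisson rates
counts = rng.poisson(lam)                         # one count per rate

# negative binomial moments: mean a*theta, variance a*theta*(1 + theta)
print(counts.mean(), a * theta)               # both about 6
print(counts.var(), a * theta * (1 + theta))  # both about 18
```

Note the variance is three times the mean here, whereas a plain Poisson would have them equal.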
Now, what about mixed models (i.e. mixed-effect models)? Well, as alluded to by Tim, mixed models are really regression models, where we make specific assumptions about the nature of the regression parameters (i.e. fixed vs. random effects). In general, a mixed model is any regression model that contains both fixed and random effects, where we assume the random effects follow some distribution. See the link in Tim's answer for a more thorough discussion on what this actually means.
The main conceptual difference between the approaches is that a mixture model is really just a way of specifying the distribution of a random variable (as being a mixture of other distributions), while mixed models are a way of specifying the relationship between a set of covariates and an outcome variable. Indeed, it is possible to have a mixed(-effect) mixture model, where we have the outcome variable following a mixture of distributions and we try to relate a set of covariates to that mixture.
|
Mixture models vs Mixed models
|
Tim gives a good answer describing the conceptual differences between the two model classes. In the interest of completeness, since you asked for a practical example, here is some R code for generatin
|
Mixture models vs Mixed models
Tim gives a good answer describing the conceptual differences between the two model classes. In the interest of completeness, since you asked for a practical example, here is some R code for generating data from a mixture model. More specifically, this is a Gaussian mixture model with two components; to adapt this to Tim's notation so you can see the relationship, we have that:
$$
g(x) = \sum_{k=1}^K \pi_k f_k(x; \vartheta_k)
$$
Where $K=2$ and $f_k(x; \vartheta_k) \sim N(\mu_k,\sigma^2_k)$. That is, $g(x)$ is distributed as a finite mixture of two normal (Gaussian) distributions, each with its own mean and variance; where $\pi$ is a parameter that governs the degree of mixing. Now, let's visualize what this all actually means. Let's start by setting some of these parameters to fixed values.
Let's say that $\pi=0.5$, $\mu_1=1$, $\sigma^2_1=1$, $\mu_2=6$, and $\sigma^2_2=2$. This gives us:
$$
g(x) = 0.5\times N(1,1) + 0.5\times N(6,2)
$$
So you can see that this is just a weighted sum of two normal distributions. Let's see what happens when we generate this data in R:
# Set our sample size
N <- 1000
# Set our values of pi
pi <- sample(1:2,prob=c(0.5,0.5),size=N,replace=TRUE)
# Set the parameters of our two normal distributions
mus <- c(1,6)
sds <- sqrt(c(1,2))
# Note that above I parameterized our normals in terms of their
# variance, but the rnorm function below requires standard
# deviations, thus why I'm taking the square root.
# Generate our data
mixture_model <- rnorm(n=N,mean=mus[pi],sd=sds[pi])
# Histogram
hist(mixture_model)
You can see that we have created a distribution that is bimodal, with each mode corresponding to one of our component means (1 and 6). I will leave it as an exercise to you to see what happens as you change the mixing proportions ($\pi$) or the parameters of each component, and how that impacts the mixture distribution. It is also possible to define mixtures with more than two components (in fact, there is even a literature on "infinite" mixtures, where the number of components is in itself a random variable!) and with distributions other than normal ones (indeed, in general, there is no need even for each component of the mixture to be the same distribution, and I have even seen nested mixture models, where one component of the mixture is in and of itself another mixture model!).
(As a sidebar, the term "mixture model" has occasionally been used to describe a different class of models that are more properly referred to as "compound probability distributions". For example, if we have a Poisson distribution whose rate parameter is assumed to be a random variable following a Gamma distribution, the resulting "Poisson-Gamma mixture," as it is occasionally called, is actually a compound probability distribution that can be shown to follow a negative binomial distribution. There is a relationship here with the notion of prior/posterior distributions in Bayesian models, and with the notion of finite mixture models I described above, but that's beyond the scope of this question.)
Now, what about mixed models (i.e. mixed-effect models)? Well, as alluded to by Tim, mixed models are really regression models, where we make specific assumptions about the nature of the regression parameters (i.e. fixed vs. random effects). In general, a mixed model is any regression model that contains both fixed and random effects, where we assume the random effects follow some distribution. See the link in Tim's answer for a more thorough discussion on what this actually means.
The main conceptual difference between the approaches is that a mixture model is really just a way of specifying the distribution of a random variable (as being a mixture of other distributions), while mixed models are a way of specifying the relationship between a set of covariates and an outcome variable. Indeed, it is possible to have a mixed(-effect) mixture model, where we have the outcome variable following a mixture of distributions and we try to relate a set of covariates to that mixture.
|
Mixture models vs Mixed models
Tim gives a good answer describing the conceptual differences between the two model classes. In the interest of completeness, since you asked for a practical example, here is some R code for generatin
|
44,834
|
Mixture models vs Mixed models
|
Besides similar-sounding names, they are completely different kinds of models.
Finite mixture models are models that describe your data in terms of mixture distribution,
$$
g(x) = \sum_{k=1}^K \pi_k f_k(x; \vartheta_k)
$$
where the final distribution $g$ is a mixture of $K$ component distributions $f_k$, each parametrized by its own parameters $\vartheta_k$ and mixing proportion $\pi_k \ge 0$ such that $\sum_{k=1}^K \pi_k = 1$. They can be used for many different purposes, like clustering, but there are also more complicated usages, like cluster-wise regression. There are also infinite mixtures, where there is no fixed $K$, but that is a longer story.
Mixed effects models and generalized mixed effects models are similar to linear regression and generalized linear models, but whereas regression and GLMs include only fixed effects, LMMs and GLMMs also include random effects. For more details see
What is the difference between fixed effect, random effect and mixed effect models?
Which mean to use in a one sample t-test on transformed data
I don't think squaring will necessarily do what you want even if it makes things look normal.
If you want to test equality of a population mean to a hypothesized mean then by testing a transformed variable you can be highly likely to reject when the original population mean is the one given in the null (that is, you will be likely to reject true nulls).
Consider some random variable $X$ which has some distribution with $\mu=\mu_0$ and non-zero variance.
Let $Y=X^2$.
$E(Y)=E(X^2) = E(X)^2 +\text{Var}(X)=\mu_0^2+\sigma^2_X$
Consequently, a test of $H_0^*:\mu_Y=\mu_0^2$ should reject (and in large samples will become essentially certain to), even though the original hypothesis $H_0:\mu_X=\mu_0$ was true.
Beware of mixing hypothesis tests and transformations unless you actually understand how they behave!
Illustration
Here's a sample from a somewhat left-skew distribution with population mean 5:
By chance, the sample mean came out really close to the population mean:
> mean(y)
[1] 5.000247
Now we square it. How does the mean compare with 25?
> mean(y^2)
[1] 27.97773
Almost 28 (the population variance of Y was about 3, so this is to be expected)
So if we test whether the population mean of $Y^2$ is 25 ... we're likely to reject. (In this particular sample the p-value would only be about 0.08)
Code was requested; unfortunately I didn't keep the code I used to generate
the example; this is vaguely similar to the example in that it's left skew with mean 5 and variance is substantial (though not as large as in the original):
n=100;x=ifelse(runif(n)<.5,pmax(runif(n),runif(n),runif(n))*5,runif(n,5,7.5))
Here's the results from a sample of 1000 rather than 100 with that code:
> mean(x);var(x);mean(x^2)
[1] 4.985436
[1] 2.35402
[1] 27.20623
> mean(x)^2+var(x)*(1-1/length(x)) # adjust for Bessel's correction
[1] 27.20623
(The adjustment to undo Bessel's correction on samples makes it work like the algebra for the population)
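The rejection rate under a true null about the mean of $X$ can also be estimated directly. Here is a rough simulation sketch (Python with NumPy; the normal population and the specific parameter values are my own arbitrary assumptions, not the skewed example above), testing the squared data against $\mu_0^2$ with a hand-rolled one-sample t statistic:

```python
import numpy as np

rng = np.random.default_rng(7)
mu0, n, reps = 5.0, 200, 2000

rejections = 0
for _ in range(reps):
    x = rng.normal(mu0, 2.0, n)       # H0: E[X] = mu0 is TRUE here
    y = x ** 2                        # transformed data
    # one-sample t statistic for H0*: E[Y] = mu0^2
    t = (y.mean() - mu0 ** 2) / (y.std(ddof=1) / np.sqrt(n))
    rejections += abs(t) > 1.96       # nominal 5% two-sided test
rate = rejections / reps
print(rate)  # far above the nominal 0.05
```

Even though the null about $X$ holds exactly, the test on the squared data rejects most of the time, because $E[X^2] = \mu_0^2 + \sigma^2_X \ne \mu_0^2$.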
[How relevant would this be to a two sample case? If the two populations from which the samples were drawn don't have the same variance, the means of their squares will be different. This is quite different from the usual issue with different variance and the equal-variance t-test -- the test in this case is much more impacted.]
So what to do? We have to start with the precise hypothesis of interest and figure out a reasonable way to (at least to a good approximation) test that.
It appears the null is definitely equality of means.
There are several options I see:
Use the t-test as is; depending on how skewed and heavy-tailed the distribution is, significance level and power may not be so badly impacted.
Come up with a suitable parametric model for the variables in question.
A permutation test is possible but may present difficulties; under the usual assumptions it would be necessary to assume symmetry under the null (this doesn't imply that the sample should look symmetric, only that if the null were true that it should be expected to be symmetric).
A form of bootstrap test might be employed; it may be reasonable if sample sizes were fairly large for the two variables.
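A minimal sketch of the bootstrap option for the one-sample case (in Python for convenience; shifting the sample so the null holds before resampling is one common convention, and the skewed example data are made up):

```python
import numpy as np

def bootstrap_mean_test(x, mu0, n_boot=10_000, seed=0):
    """Two-sided bootstrap p-value for H0: population mean == mu0.

    The sample is shifted so that H0 holds, then resampled with
    replacement to approximate the null distribution of the mean.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    shifted = x - x.mean() + mu0          # impose the null
    boots = rng.choice(shifted, size=(n_boot, len(x)), replace=True).mean(axis=1)
    observed = abs(x.mean() - mu0)
    return np.mean(np.abs(boots - mu0) >= observed)

rng = np.random.default_rng(1)
sample = rng.exponential(scale=5.0, size=200)   # skewed, true mean 5
p_true_null = bootstrap_mean_test(sample, mu0=5.0)    # null is true
p_false_null = bootstrap_mean_test(sample, mu0=7.0)   # null is false
print(p_true_null, p_false_null)
```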
|
Which mean to use in a one sample t-test on transformed data
|
As @user20637 points out in the comment below, the result of a t-test of your squared data against the squared US population mean will not necessarily imply that your data are shifted relative to the US population. You cannot assess that from what you have. Instead, you are just testing whether your mean is above a fixed point. Beyond that, you are just making assumptions.
If you have enough data, and can assume that the distribution of your data is a good representation of the population distribution from which they were drawn, you could bootstrap your mean to get a better test.
Another possibility would be to run a set of sensitivity analyses and report the range of results. For example, what if the reported value is the population mean, but the population distribution were as skewed as yours? Other possibilities exist.
You could also be upfront about the assumptions you are making about the population by using a Bayesian analysis.
|
Function with multiple local minima
|
Regarding example of functions with multiple local minima I would suggest visiting a website like the Virtual Library of Simulation Experiments: Test Functions and Datasets - Optimization Test Problems from Simon Fraser University. It contains many examples of functions with many local minima. A trivial two-factor example would be something like: $x \sin(w_1 x+w_2)$. In real-life terms most functions that might reflect some seasonality/periodicity will potentially have multiple local minima relating to that seasonal/periodic effect.
The most straightforward way to assess whether a particular function has multiple local minima is to use calculus. Multiple local minima correspond to multiple instances of the first derivative being zero and the second derivative being positive. As Neil mentioned: "in two dimensions (like the plot he's drawn), the second derivative is a matrix, in which case a minimum corresponds to a positive definite second derivative matrix." Moving to multivariate functions is reflected in the dimensions of the function's derivatives. The object we use in that case is the Hessian matrix (which, as mentioned, we want to be at least positive semi-definite).
The branch of mathematics dealing with this topic is called Mathematical Optimisation. Real-life examples of optimisation tasks are extensively involved in the field of Operational Research.
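Using the $x \sin(w_1 x+w_2)$ example above, the derivative condition can be checked numerically. This Python sketch (the values of $w_1$, $w_2$ and the grid are arbitrary illustrative choices) locates approximate local minima by finding where $f'$ changes sign from negative to positive:

```python
import numpy as np

# f(x) = x*sin(w1*x + w2), the two-parameter example from the text
w1, w2 = 2.0, 0.5
f  = lambda x: x * np.sin(w1 * x + w2)
df = lambda x: np.sin(w1 * x + w2) + w1 * x * np.cos(w1 * x + w2)

# scan a grid: a local minimum is where f'(x) crosses zero from - to +
xs = np.linspace(-10, 10, 200_001)
d = df(xs)
minima = xs[1:][(d[:-1] < 0) & (d[1:] >= 0)]
print(len(minima), minima[:3])   # many local minima on this interval
```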
|
Function with multiple local minima
|
The functions that you are looking for are known as test functions or artificial landscapes.
Wikipedia has a very nice list of test functions for optimization. I recommend looking at the references from the link directly.
|
Function with multiple local minima
|
If there are only a few and you can estimate a range where they will lie, you can try descent methods with different starting points that will converge to each of them.
This practice works some of the time, but as we increase the number of dimensions (or as we know less about the shape of the function), this simplistic approach will no longer be practical (see previous answers)
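A minimal Python sketch of this multi-start idea on a one-dimensional function with several minima (the test function, step size, and grid of starting points are all arbitrary illustrative choices):

```python
import numpy as np

f  = lambda x: np.sin(3 * x) + 0.1 * x**2        # has several local minima
df = lambda x: 3 * np.cos(3 * x) + 0.2 * x       # its derivative

def descend(x0, lr=0.01, steps=2000):
    """Plain gradient descent from a single starting point."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

# run the same descent from many starting points, collect distinct minima
starts = np.linspace(-5, 5, 21)
ends = np.array([descend(s) for s in starts])
distinct = np.unique(np.round(ends, 3))
print(distinct)   # each value is a different local minimum reached
```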
|
If we primarily use LSTMs over RNNs to solve the vanishing gradient problem, why can't we just use ReLUs/leaky ReLUs with RNNs instead?
|
I think there's some confusion here. The reason you have vanishing gradients in neural networks (with, say, softmax) is wholly different from RNNs. With neural networks, you get vanishing gradients because most initial conditions make your outputs end up on either the far left or far right of your softmax layer, giving it a vanishingly small gradient. In general it's difficult to select proper initial conditions, so people opted to use leaky ReLUs because they don't have the above problems.
With RNNs, by contrast, the problem is that you are repeatedly applying your RNN to itself, which tends to cause either exponential blowup or shrinkage. See this paper for example:
On the difficulty of training recurrent neural networks: https://arxiv.org/abs/1211.5063
The suggestions of the above paper are: if the gradient is too large, then clip it to a smaller value. If the gradient is too small, regularize it via a soft constraint to not vanish.
There's a lot of research on LSTMs, and plenty of theories on why LSTMs tend to outperform RNNs. Here's a nice explanation: http://r2rt.com/written-memories-understanding-deriving-and-extending-the-lstm.html#fnref2
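The clip-it-to-a-smaller-value suggestion from the paper can be sketched as clipping by global norm (a numpy illustration; the threshold and the gradient values are made up):

```python
import numpy as np

def clip_by_global_norm(grads, max_norm):
    """Rescale a list of gradient arrays so their joint L2 norm
    is at most max_norm (the 'clip it to a smaller value' step)."""
    total = np.sqrt(sum(np.sum(g**2) for g in grads))
    if total > max_norm:
        scale = max_norm / total
        grads = [g * scale for g in grads]
    return grads, total

grads = [np.array([3.0, 4.0]), np.array([12.0])]   # global norm sqrt(169) = 13
clipped, norm_before = clip_by_global_norm(grads, max_norm=5.0)
print(norm_before, np.sqrt(sum(np.sum(g**2) for g in clipped)))
```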
|
Difference between confounding and interaction
|
A confounding variable is a variable that correlates with both your regressor and the dependent variable. In some way, this second predictor variable explains all or part of the dependent variable and also is reflected in the independent variable. In essence they share a common quality that means when both are included that quality is over-represented.
In an ecological system, something like a disease that kills both predator and prey is acting on the populations of both, yet has nothing to do with the effect of predation on the decline of prey or growth of predators. It confounds the true predator-prey relationship, particularly if it is disproportionate in its virulence between species.
Interaction is much more complicated because it means that two separate regressors work together to create an outcome variable. They do not overlap, they in some way coalesce in an effect that is not simply additive. Their relationship, as it acts on your dependent variable, is sometimes difficult to figure out.
Suppose two proteins work together to accomplish some kind of chemical process in the human body through a single pathway. Removing one or the other will break your model, though it may be difficult to quantify their relationship exactly from the model if there are other components which create the appropriate environment for the reaction or regulate the presence of the resulting product (like reuptake or conversion).
With confounding variables, you can often leave one or the other out and get a more accurate model (although not always). With an interaction, leaving one or the other out will likely make it worse.
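A toy simulation contrasting the two situations (not the ecological or protein examples above; the data-generating models are invented for illustration). A confounder induces a spurious correlation between regressor and outcome, while omitting an interaction term leaves a purely additive model fitting badly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000

# Confounding: a drives both e and d; e has no direct effect on d,
# yet the naive correlation between them is clearly nonzero
a = rng.normal(size=n)
e = a + rng.normal(size=n)
d = a + rng.normal(size=n)
print("naive corr(e, d):", np.corrcoef(e, d)[0, 1])

# Interaction: the outcome depends on the product of two regressors
x1, x2 = rng.normal(size=n), rng.normal(size=n)
y = x1 * x2 + 0.1 * rng.normal(size=n)

def rss(X, y):
    """Residual sum of squares of an OLS fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return np.sum((y - X @ beta) ** 2)

additive = rss(np.column_stack([np.ones(n), x1, x2]), y)
with_int = rss(np.column_stack([np.ones(n), x1, x2, x1 * x2]), y)
print("RSS additive:", additive, " with interaction:", with_int)
```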
|
Why is a Normal Mixture Model not identifiable and why does it matter?
|
Consider the case where $\theta_1 = (w_1=0.5, \mu_1 = 0, \sigma_1^2 = 1)$ and $\theta_2 = (w_2=0.5, \mu_2 = 1, \sigma_2^2 = 1)$. We get exactly the same fit to the data if $\theta_1 = (w_1=0.5, \mu_1 = 1, \sigma_1^2 = 1)$ and $\theta_2 = (w_2=0.5, \mu_2 = 0, \sigma_2^2 = 1)$. Thus, there is no way to empirically learn the value of $\mu_1$ regardless of the amount of data (i.e., it is not identified).
In this case, the absence of identifiability is not "bad", as the real problem to be solved is estimating the parameters, and whether we label one component of the mixture as the first or the second is immaterial.
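A quick numeric check of this symmetry: with equal weights, swapping the two component means leaves the likelihood unchanged (Python sketch; the data-generating choices are arbitrary):

```python
import numpy as np

def mixture_loglik(x, w, mu1, mu2, s2=1.0):
    """Log-likelihood of an equal-variance two-component normal mixture."""
    norm = lambda x, m: np.exp(-(x - m)**2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    return np.sum(np.log(w * norm(x, mu1) + (1 - w) * norm(x, mu2)))

rng = np.random.default_rng(0)
# draw 500 points from a 50/50 mixture of N(0,1) and N(1,1)
x = np.where(rng.random(500) < 0.5, rng.normal(0, 1, 500), rng.normal(1, 1, 500))

ll_a = mixture_loglik(x, 0.5, 0.0, 1.0)
ll_b = mixture_loglik(x, 0.5, 1.0, 0.0)   # components swapped
print(ll_a, ll_b)   # identical: the likelihood cannot tell the labels apart
```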
|
Why is a Normal Mixture Model not identifiable and why does it matter?
|
The mixture sum is well-defined and almost always identifiable, while
the elements of the sum can be switched with one another without
changing the sum, by mere commutativity: a+b=b+a.
For instance, here is the surface of a log-likelihood associated with the mixture$$\frac{1}{2}\mathrm{N}(\mu_1,1)+\frac{1}{2}\mathrm{N}(\mu_2,1)$$ and 250 observations from this distribution where I used $\mu_1=0$ and $\mu_2=2$. (Or equivalently $\mu_2=0$ and $\mu_1=2$.) The paths on the surface are EM steps with different starting points, but the figure shows clearly the symmetry of the likelihood function along the diagonal, which is an illustration of this lack of identifiability of $\mu_1$ as such.
[The picture is taken from our book Introducing Monte Carlo Methods with R, written with my late friend George Casella.]
The lack of identifiability is not an issue at the level of the
distribution but it creates inference problems, from a multimodal
likelihood to complex limiting distributions for the estimators, to
exploration troubles for numerical solutions.
Here is for instance the output of four STAN chains ending up in different modes of the posterior, taken from a recent discussion on the identifiability of mixtures by Michael Betancourt:
|
Deriving exponential distribution from sum of two squared normal random variables
|
First write down the joint probability density function of $X$ and $Y$ (here $X$ and $Y$ are independent, each $\mathrm{N}(0,1/2)$, so the joint density is $\frac{1}{\pi}e^{-x^2-y^2}$) and switch to polar coordinates; then
$$ \mathbb{P}(Z\leq z)=\mathbb{P}(X^2+Y^2\leq z)=\frac{1}{\pi}\int_{x^2+y^2\leq z}e^{-x^2-y^2}\;dxdy=\frac{1}{\pi}\int_{0}^{2\pi}\int_0^{\sqrt{z}}e^{-r^2}r\;drd\theta$$
$$=2\int_0^{\sqrt{z}}re^{-r^2}\;dr $$
Now if we set $u=r^2$ then we get
$$ \mathbb{P}(Z\leq z)=\int_0^ze^{-u}\;du = 1-e^{-z}$$
so $Z$ is exponentially distributed with rate parameter $\lambda = 1$.
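A Monte Carlo check of the result (assuming, as the $\frac{1}{\pi}e^{-x^2-y^2}$ density implies, that $X$ and $Y$ are independent $\mathrm{N}(0,1/2)$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# the density exp(-x^2 - y^2)/pi corresponds to independent X, Y ~ N(0, 1/2)
x = rng.normal(0, np.sqrt(0.5), n)
y = rng.normal(0, np.sqrt(0.5), n)
z = x**2 + y**2

# compare the empirical CDF of Z with the Exp(1) CDF, 1 - exp(-t)
for t in (0.5, 1.0, 2.0):
    print(t, np.mean(z <= t), 1 - np.exp(-t))
```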
|
Deriving exponential distribution from sum of two squared normal random variables
|
$Z$ has a chi-square distribution with two degrees of freedom, which is precisely the special case of the chi-square that is an exponential distribution. Here $X$ and $Y$ are required to be independent.
|
A p-value greater than 0.05 means that my results are meaningless?
|
A p-value above 0.05 doesn't necessarily say 'your correlation is meaningless'.
However, there's more than a 5% chance that you could see a sample correlation at least as far from zero as yours when the population correlation is zero.
Loosely this means you can't confidently distinguish the population correlation your sample was drawn from, from one that is zero (assuming you do mean to set your significance level to 5%)
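The logic can be illustrated by simulating sample correlations under a true population correlation of zero (Python sketch; the observed $r=0.30$ with $n=30$ is a made-up example):

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_sim = 30, 10_000
r_obs = 0.30   # a hypothetical observed sample correlation

# sample correlations when the population correlation is exactly zero
r_null = np.array([
    np.corrcoef(rng.normal(size=n), rng.normal(size=n))[0, 1]
    for _ in range(n_sim)
])
p_approx = np.mean(np.abs(r_null) >= r_obs)
print(p_approx)   # above 0.05 here: r = 0.30 with n = 30 is not 'significant'
```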
|
A p-value greater than 0.05 means that my results are meaningless?
|
It depends on what you are trying to do. I frequently estimate models where I literally do not care about the "p" values because I believe my model. The best estimate of the model is the estimate, not the value that the estimate may or may not be significantly different from.
On the other hand, if the purpose is to test a binary hypothesis and not to fit a model, then your results still may or may not be "meaningless". The following is a non-comprehensive list of scenarios that I've experienced:
Textbook interpretation: Taking your "p" value as the "true" p value then you can interpret your results as "meaningless" at a 5% level but significant at a 10% level.
Your "p" value may be inaccurate. It was created using a set of assumptions that may or may not be satisfied in your test. Your actual "p" value may differ depending on whether the assumptions are satisfied.
Your entire model may be incorrectly specified and any "p" values (whether "significant" or not) are meaningless because your model doesn't actually approximate the data generating process.
On a last note: no data is meaningless. However, I interpret your use of the word "meaningless" to mean "not significant".
|
A p-value greater than 0.05 means that my results are meaningless?
|
It depends on what you are trying to do. I frequently estimate models where I literally do not care about the "p" values because I believe my model. The best estimate of the model is the estimate, no
|
A p-value greater than 0.05 means that my results are meaningless?
It depends on what you are trying to do. I frequently estimate models where I literally do not care about the "p" values because I believe my model. The best estimate of the model is the estimate, not the value that the estimate may or may not be significantly different from.
On the other hand if the purpose is the test a binary hypothesis and not to fit a model, then your results still may or not be "meaningless". The following is a non-comprehensive list of scenarios that I've experienced:
Textbook interpretation: Taking your "p" value as the "true" p value then you can interpret your results as "meaningless" at a 5% level but significant at a 10% level.
Your "p" value may be inaccurate. It was created using a set of assumptions that may or may not be satisfied in your test. Your actual "p" value may differ depending on whether the assumptions are satisfied.
Your entire model may be incorrectly specified and any "p" values (whether "significant" or not) are meaningless because your model doesn't actually approximate the data generating process.
On a last note: no data is meaningless. However, I interpret your use of the word "meaningless" to mean "not significant".
|
44,848
|
A p-value greater than 0.05 means that my results are meaningless?
|
The p-value is a measure of the evidence against the null hypothesis provided
by the data: the smaller the p-value, the stronger the evidence against the null. Typically, researchers use the following evidence scale:
p(X) < 0.01 very strong evidence,
p(X) ∈ (0.01, 0.05) strong evidence,
p(X) ∈ (0.05, 0.1) weak evidence,
p(X) > 0.1 little or no evidence.
Using this "classification" as a benchmark, I would not call your results "meaningless", since there is some evidence against the null. I would try to collect and incorporate more data into the analysis.
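Purely as an illustrative sketch, the scale above can be written as a small function (the function name is mine, and the thresholds follow the quoted bands exactly):

```python
def evidence_against_null(p):
    """Map a p-value to the informal evidence scale quoted above."""
    if p < 0.01:
        return "very strong evidence"
    elif p < 0.05:
        return "strong evidence"
    elif p < 0.1:
        return "weak evidence"
    else:
        return "little or no evidence"

# A p-value of 0.06 lands in the "weak evidence" band rather than being meaningless.
print(evidence_against_null(0.06))
```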
|
44,849
|
A p-value greater than 0.05 means that my results are meaningless?
|
You can use an expression like "marginally significant at the 0.06 significance level". 0.05 is popular but not absolute.
|
44,850
|
Removing intercept from GLM for multiple factorial predictors only works for first factor in model
|
That trick of getting a parameter for each level of the factor by removing the intercept only works when there is only one factor, as you have seen. You can understand why by counting degrees of freedom. Let factor $a$ have $a$ levels and factor $b$ have $b$ levels. Factor $a$ then has $a-1$ degrees of freedom: its indicator matrix has $a$ columns, with a $1$ in each row marking the level present in that row, but together with the intercept it only contributes $a-1$ further dimensions. Likewise factor $b$ has $b-1$ degrees of freedom, and the intercept has one. So the model formula $ ~ a + b$ (which really is $ ~ a + b + 1$) has $1 + (a-1) + (b-1) = a+b-1$ degrees of freedom. Removing the intercept (model formula $ ~ a + b - 1$) represents the same model; only the parametrization changes, so it must also have $a+b-1$ degrees of freedom. That $-1$ shows that there cannot be $a+b$ parameters, so one of the factors must still get one parameter fewer than its number of levels.
That explains what you have seen. But you can still report a coefficient for the missing level of $b$: it is simply zero (depending on the contrasts you are using).
To make this a bit more explicit, let us look at an example. I will use R for the matrix algebra. To make design matrices (in R parlance, "model matrices") from factors, we need to define contrast functions. I use the defaults:
> options("contrasts")
$contrasts
unordered ordered
"contr.treatment" "contr.poly"
First we make two factors for a simple, fully crossed design:
a <- factor(rep(letters[1:3], 3))
b <- factor(rep(letters[1:3], each=3))
Then design matrices for each of them:
> X1 <- model.matrix( ~ a-1)
> X2 <- model.matrix( ~b-1)
> X1
aa ab ac
1 1 0 0
2 0 1 0
3 0 0 1
4 1 0 0
5 0 1 0
6 0 0 1
7 1 0 0
8 0 1 0
9 0 0 1
attr(,"assign")
[1] 1 1 1
attr(,"contrasts")
attr(,"contrasts")$a
[1] "contr.treatment"
> X2
ba bb bc
1 1 0 0
2 1 0 0
3 1 0 0
4 0 1 0
5 0 1 0
6 0 1 0
7 0 0 1
8 0 0 1
9 0 0 1
attr(,"assign")
[1] 1 1 1
attr(,"contrasts")
attr(,"contrasts")$b
[1] "contr.treatment"
Each of them, separately, is of full rank:
library(MASS)
library(Matrix)
> Matrix::rankMatrix(X1)
[1] 3
attr(,"method")
[1] "tolNorm2"
attr(,"useGrad")
[1] FALSE
attr(,"tol")
[1] 1.998401e-15
> Matrix::rankMatrix(X2)
[1] 3
attr(,"method")
[1] "tolNorm2"
attr(,"useGrad")
[1] FALSE
attr(,"tol")
[1] 1.998401e-15
But when combined there is a rank deficit, so they must have one dimension "in common":
rankMatrix(cbind(X1, X2))
[1] 5
attr(,"method")
[1] "tolNorm2"
attr(,"useGrad")
[1] FALSE
attr(,"tol")
[1] 1.998401e-15
To identify the common dimension we use the Null() function from package MASS, calculating the null space:
Null(t(cbind(X1, X2)))
[,1]
[1,] -0.4082483
[2,] -0.4082483
[3,] -0.4082483
[4,] 0.4082483
[5,] 0.4082483
[6,] 0.4082483
Yes, the common dimension is the constant vector.
|
44,851
|
Removing intercept from GLM for multiple factorial predictors only works for first factor in model
|
@kjetil b halvorsen has done a good job outlining the main ideas here. Let me add a couple supplementary points.
With a categorical variable, suppressing the intercept results in level means coding, instead of the default reference level coding. I explain this in greater detail here: How can logistic regression have a factorial predictor and no intercept?
You can use level means coding with multiple categorical variables, but in essence you have to fit the full interaction. In your case, you only wanted to fit the additive model (y~a+b); that is what you cannot do, as previously explained.
Should you be committed to using level means coding, the procedure is fairly straightforward. You first create a new, single variable as the Cartesian product (the combinations) of all possible levels of your various categorical variables. For example, in place of your original two categorical variables (a, with 4 levels, and b, with 9), you would have a single variable with 36 levels (a1b1, a1b2, a1b3, a1b4, a1b5, a1b6, a1b7, a1b8, a1b9, a2b1, ..., a4b9). Then you fit your model using level means coding (i.e., suppressing the intercept) with the new variable:
mod <- glm(y~0+ab, family=binomial(logit), data=pretend)
summary(mod)
Note again that this is equivalent to glm(y~a*b, ...); it is only that the output will be presented differently.
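The bookkeeping behind the combined variable (in R one would typically use interaction(a, b) or paste(a, b)) can be sketched in a few lines; this is an illustration only, with made-up level names:

```python
from itertools import product

# Hypothetical levels: a has 4 levels and b has 9, as in the answer above.
a_levels = [f"a{i}" for i in range(1, 5)]
b_levels = [f"b{j}" for j in range(1, 10)]

# The combined factor has one level per combination: 4 * 9 = 36 levels.
ab_levels = [ai + bj for ai, bj in product(a_levels, b_levels)]

# Observed rows are recoded by concatenating each row's two labels.
a_obs = ["a1", "a2", "a4"]
b_obs = ["b3", "b9", "b1"]
ab_obs = [ai + bj for ai, bj in zip(a_obs, b_obs)]

print(len(ab_levels), ab_obs)
```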
|
44,852
|
Robust methods and penalized regression
|
The majority of the development of Ridge and LASSO relates to estimation of OLS parameters. Recent work has expanded this to GLMs for exponential families, unified under the notion that it's the likelihood that's penalized.
In robust statistics, one views maximum likelihood as a special case of the general optimization problem for a general loss function: $\sum_{i=1}^n \rho(\mathbf{X}_i\beta - Y_i)$. Different loss functions can be defined to give robust estimators. For instance, $\rho(r) = \frac{r^2}{1+r^2}$ is locally quadratic but bounded above by 1, which has the effect of "downweighting" large residuals. This is a good alternative to the OLS estimator, which uses unbounded quadratic loss. With minimax estimation, one can often show these estimators are the "best" according to some arbitrarily defined risk functions.
I don't see any reason why the general minimax estimator would not be amenable to $\mathcal{L}_1$ or $\mathcal{L}_2$ penalties. As far as I know, however, substantive research hasn't been done with the problem of estimation in this case.
$$\hat{\beta} = \mbox{argmin}_\beta \left\{ \sum_{i=1}^n \rho(\mathbf{X}_i\beta - Y_i) + \lambda \| \beta\| \right\}$$
So play around with it using your own optimization functions and see what you find. Make good use of the R functions optim and nlm. I don't think there are any packages for this.
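The answer points at R's optim and nlm; purely as a sketch of the same idea in Python, here is the bounded loss $\rho(r) = r^2/(1+r^2)$ combined with a penalty on toy data of my own making. I use a smooth $\mathcal{L}_2$ penalty (rather than $\mathcal{L}_1$) so that a gradient-based optimizer applies directly:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy regression data with a handful of gross outliers in y.
n, p = 100, 3
X = rng.normal(size=(n, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + rng.normal(scale=0.1, size=n)
y[:5] += 20.0  # contaminate five responses

def rho(r):
    # Bounded loss from the answer: locally quadratic, bounded above by 1.
    return r**2 / (1 + r**2)

def objective(beta, lam=0.1):
    # Robust fit term plus a smooth L2 (ridge-style) penalty.
    return rho(X @ beta - y).sum() + lam * (beta**2).sum()

# Start from the (outlier-contaminated) least-squares fit, then refine.
beta0 = np.linalg.lstsq(X, y, rcond=None)[0]
fit = minimize(objective, beta0, method="BFGS")
print(fit.x)  # close to beta_true; the outliers are downweighted
```

Because the bounded loss makes the objective non-convex, a sensible starting point (here the least-squares fit) matters.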
|
44,853
|
Robust methods and penalized regression
|
Sure, you can combine an $l_1$ (or $l_2$) penalty with robust regression.
Consider for example Alfons et al. 2013 [0] which combines $l_1$ sparsity penalty with the LTS loss function (and a FastLTS like algorithm). Their Lasso-LTS estimator is defined as:
$$(1)\quad\hat{\pmb\beta}_{\text{LLTS}} = \arg\min_{\pmb\beta}\sum_{i=1}^h S(\pmb y - \pmb X\pmb\beta)_{(i)}+h\lambda||\pmb\beta||_1$$
where $n$ indicates the number of observations, $p$ the number of design variables, $S$ is a symmetric, smooth, positive loss function, $h$ is an integer larger than $[n/2]+1$ and, for any $n$-vector $\pmb y$,
$S(\pmb y)$ is the $n$-vector obtained by applying $S$ to the entries of $\pmb y$ element-wise; $S(\pmb y)_i$ and $S(\pmb y)_{(i)}$ are respectively the $i$-th and the $i$-th smallest entry of this vector, so that $\sum_{i=1}^hS(\pmb y)_{(i)}$ is the sum of the $h$ smallest entries of $S(\pmb y)$.
In this notation, $h$ is a parameter that governs the desired robustness of the estimator to the presence of outliers in the data and $S$ is your usual loss function (for example, squared loss). The robustness to outliers comes from the partial sum in $(1)$. Since $h>[n/2]+1$, the use of a partial sum prevents observations inconsistent with the multivariate pattern of the bulk of the data from influencing the fit (for an online, R-code-based treatment of robust regression see this tutorial; a more textbook-style treatment of robust estimation can be found in [2]).
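To make the partial sum in $(1)$ concrete, here is a small Python sketch of the objective value for a candidate $\pmb\beta$, taking $S$ to be squared loss (an assumption on my part; the function name is made up):

```python
import numpy as np

def sparse_lts_objective(beta, X, y, h, lam):
    """Objective (1): sum of the h smallest per-observation losses,
    plus the L1 penalty scaled by h."""
    losses = (y - X @ beta) ** 2     # S applied entry-wise (squared loss)
    trimmed = np.sort(losses)[:h]    # the h smallest entries S(y)_(1), ..., S(y)_(h)
    return trimmed.sum() + h * lam * np.abs(beta).sum()

# One wild outlier in y contributes nothing once h excludes it.
X = np.ones((5, 1))
y = np.array([1.0, 1.1, 0.9, 1.0, 100.0])
beta = np.array([1.0])
print(round(sparse_lts_objective(beta, X, y, h=4, lam=0.0), 6))  # 0.02, outlier trimmed away
```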
Here is a link to a high quality R implementation of the Lasso-LTS method.
The Lasso-LTS method is robust in the sense of having a bounded loss function and a positive and high breakdown point. The (finite sample) breakdown point of an estimator is a pragmatic measure of its robustness to the presence of outliers in the data [1]. Informally, it is the smallest proportion of the original data that needs to be replaced by outliers to drive the estimates arbitrarily far away from the values they would have had on the original data. Intuitively, the higher the breakdown point, the more robust the estimator (to give you an idea, the breakdown point of the classical lasso is essentially the same as that of the univariate mean: 0).
For the Lasso-LTS, the finite sample breakdown point is:
$$\varepsilon^*_n(\hat{\pmb\beta}_{\text{LLTS}})=\frac{n-h+1}{n}\approx0.5,$$
(for comparison this is essentially similar to the breakdown point of the univariate median)
Typically, robustifying the fit term (the first term) of the total loss function renders it highly non-convex and very complex (though smooth), in the sense of having a large number of local minima. Moreover, these minima are typically disconnected from one another (they do not, say, all lie on a common manifold). This makes the search for a solution to $(1)$ a delicate affair. For these reasons, special algorithms have been developed to obtain (stochastic) approximations to the corresponding optima. A big issue here revolves around data-based procedures for picking good starting points. I wouldn't advise trying to write your own version of them (or trying to reinvent the wheel).
[0] A. Alfons, C. Croux, S. Gelper (2013). Sparse least trimmed squares regression for analyzing high-dimensional large data sets. The Annals of Applied Statistics, 7(1), 226-248. Ungated Link
[1] Donoho, D. L., Breakdown properties of multivariate location estimators, Dept. Statistics, Harvard Univ. 1982. Ungated Link
[2] Maronna R. A., Martin R. D. and Yohai V. J. (2006). Robust Statistics: Theory and Methods. Wiley, New York.
|
44,854
|
What is the name for the distribution shape of a histogram with this kind of curvature?
|
It could be a bimodal distribution
Or it could just be a run-of-the-mill normal distribution, as the dip in the middle doesn't appear to be that big.
Image Bimodal.png by Maksim, from Wikimedia commons; licensed under the Creative Commons Attribution-Share Alike 3.0 Unported license
|
44,855
|
What is the name for the distribution shape of a histogram with this kind of curvature?
|
A bimodal distribution. You could also say it's an almost-normal curve with negative excess kurtosis. (Kurtosis describes the weight of the tails and sharpness of the peak relative to a normal curve; a flat-topped, light-tailed curve has negative excess kurtosis, while a sharply peaked, heavy-tailed one has positive excess kurtosis.) Your curve also appears to have a long right tail, so it is skewed to the right.
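As an illustrative sketch (my own simulation, not from the question's data): an equal mixture of two well-separated normals is bimodal and has negative excess kurtosis, which can be checked numerically:

```python
import numpy as np

rng = np.random.default_rng(1)

# Equal mixture of N(-2, 1) and N(+2, 1): symmetric and bimodal.
x = np.concatenate([rng.normal(-2, 1, 50_000), rng.normal(2, 1, 50_000)])

def excess_kurtosis(v):
    z = (v - v.mean()) / v.std()
    return float(np.mean(z**4) - 3.0)  # 0 for a normal distribution

print(round(excess_kurtosis(x), 2))  # negative: flatter-than-normal shoulders
```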
|
44,856
|
How to interpret Quadratic Terms
|
Let's consider an example (here I use Stata, but the logic works the same in any other package):
. sysuse nlsw88, clear
(NLSW, 1988 extract)
. reg wage c.tenure##c.tenure grade i.race
Source | SS df MS Number of obs = 2,229
-------------+---------------------------------- F(5, 2223) = 66.51
Model | 9640.89034 5 1928.17807 Prob > F = 0.0000
Residual | 64447.0774 2,223 28.991038 R-squared = 0.1301
-------------+---------------------------------- Adj R-squared = 0.1282
Total | 74087.9678 2,228 33.2531274 Root MSE = 5.3843
------------------------------------------------------------------------------
wage | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
tenure | .2773182 .0677307 4.09 0.000 .1444962 .4101402
|
c.tenure#|
c.tenure | -.0070752 .0036278 -1.95 0.051 -.0141894 .0000389
|
grade | .6792721 .0461853 14.71 0.000 .5887013 .7698429
|
race |
black | -.7517506 .2649033 -2.84 0.005 -1.271234 -.2322669
other | .6315991 1.06455 0.59 0.553 -1.456017 2.719215
|
_cons | -2.106807 .6357411 -3.31 0.001 -3.353516 -.8600988
------------------------------------------------------------------------------
Adding the quadratic term tenure$^2$ (c.tenure#c.tenure) to the model means that the effect of tenure changes as you get more tenure. When you have 0 years of tenure, the slope is such that your hourly wage would increase by 28 cents for an additional year of tenure if the slope remained unchanged, which it doesn't. (Hourly wage is in dollars, so a .28 dollar change is a 28 cent change.) Each additional year of tenure reduces the slope by about 1.4 cents, twice the quadratic coefficient. In this case the coefficient of the squared term is negative, so the relationship is concave. It usually helps to see this relationship as a graph:
. qui margins, at(grade=12 race=1 tenure=(0/26))
. marginsplot
Variables that uniquely identify margins: tenure
Initially you get a higher wage as you gain more tenure, but the gain decreases and even becomes negative after, say, 20 years of tenure. We can be more precise about when this occurs:
. nlcom -_b[tenure]/(2*_b[c.tenure#c.tenure])
_nl_1: -_b[tenure]/(2*_b[c.tenure#c.tenure])
------------------------------------------------------------------------------
wage | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
_nl_1 | 19.59777 5.692054 3.44 0.001 8.441549 30.75399
------------------------------------------------------------------------------
Notice the huge confidence interval; this is quite typical, so be careful about interpreting the position of the maximum.
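The point estimate from nlcom can be checked by hand: for $\hat y = b_0 + b_1 x + b_2 x^2$ the extremum sits at $x = -b_1/(2 b_2)$. A quick sketch using the coefficients reported above:

```python
b1 = 0.2773182   # coefficient on tenure
b2 = -0.0070752  # coefficient on tenure squared

turning_point = -b1 / (2 * b2)
print(round(turning_point, 2))  # about 19.6 years, matching nlcom's 19.59777
```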
|
How to interpret Quadratic Terms
|
Lets consider an example (here I use Stata, but the logic works the same in any other package):
. sysuse nlsw88, clear
(NLSW, 1988 extract)
. reg wage c.tenure##c.tenure grade i.race
Source |
|
How to interpret Quadratic Terms
Lets consider an example (here I use Stata, but the logic works the same in any other package):
. sysuse nlsw88, clear
(NLSW, 1988 extract)
. reg wage c.tenure##c.tenure grade i.race
Source | SS df MS Number of obs = 2,229
-------------+---------------------------------- F(5, 2223) = 66.51
Model | 9640.89034 5 1928.17807 Prob > F = 0.0000
Residual | 64447.0774 2,223 28.991038 R-squared = 0.1301
-------------+---------------------------------- Adj R-squared = 0.1282
Total | 74087.9678 2,228 33.2531274 Root MSE = 5.3843
------------------------------------------------------------------------------
wage | Coef. Std. Err. t P>|t| [95% Conf. Interval]
-------------+----------------------------------------------------------------
tenure | .2773182 .0677307 4.09 0.000 .1444962 .4101402
|
c.tenure#|
c.tenure | -.0070752 .0036278 -1.95 0.051 -.0141894 .0000389
|
grade | .6792721 .0461853 14.71 0.000 .5887013 .7698429
|
race |
black | -.7517506 .2649033 -2.84 0.005 -1.271234 -.2322669
other | .6315991 1.06455 0.59 0.553 -1.456017 2.719215
|
_cons | -2.106807 .6357411 -3.31 0.001 -3.353516 -.8600988
------------------------------------------------------------------------------
Adding the quadratic term tenure$^2$ (c.tenure#c.tenure) to the model means that the effect of tenure changes when you get more tenure. When you have 0 years of tenure, the slope is such that your hourly wage would increase by 28 cents for an additional year of tenure if the slope would remain unchanged, which it doesn't. (Hourly wage is in dollars, so a .28 dollar change is a 28 cents change.) Each additional year of tenure reduces the slope by .7 cents. In this case the coefficient of the square term is negative, so the relationship is concave. It usually helps to see this relationship as a graph:
. qui margins, at(grade=12 race=1 tenure=(0/26))
. marginsplot
Variables that uniquely identify margins: tenure
Initially you get a higher wage as you get more tenure, but the gain decreases and even becomes negative after say 20 years of tenure. We can be more precise about when this occurs:
. nlcom -_b[tenure]/(2*_b[c.tenure#c.tenure])
_nl_1: -_b[tenure]/(2*_b[c.tenure#c.tenure])
------------------------------------------------------------------------------
wage | Coef. Std. Err. z P>|z| [95% Conf. Interval]
-------------+----------------------------------------------------------------
_nl_1 | 19.59777 5.692054 3.44 0.001 8.441549 30.75399
------------------------------------------------------------------------------
Notice the huge confidence interval; this is quite typical, so be careful when interpreting the position of the maximum.
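As a sanity check, the point estimate from nlcom is just the vertex of the fitted parabola, $-b_1/(2 b_2)$. A quick sketch (in Python rather than Stata, plugging in the coefficients from the regression output above) reproduces it:

```python
# Turning point of a fitted quadratic: wage = b0 + b1*tenure + b2*tenure^2.
# Setting the slope b1 + 2*b2*tenure to zero gives tenure = -b1/(2*b2).
b1 = 0.2773182   # coefficient on tenure (from the output above)
b2 = -0.0070752  # coefficient on tenure squared

turning_point = -b1 / (2 * b2)
print(round(turning_point, 2))  # 19.6 years, matching the nlcom estimate
```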
|
44,857
|
How to interpret Quadratic Terms
|
1) Adding quadratic terms allows for non-linearity (in a linear model). If you think that the relation between your target variable and a feature is possibly non-linear, you can add quadratic terms. (Or you could consider a log transformation.)
2) Significance of the quadratic term signals that the relation is non-linear, and its sign tells you the type of non-linearity. A positive quadratic term suggests a convex (accelerating) relation. A negative one suggests that for low values of your feature the relation is positive, but for high values the relation becomes negative.
3) Correct. Apparently the fitted function is such that a maximum value of 20 can be predicted. After that the non-linear term dominates, if its sign is negative.
Is this of any help?
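To make point 2 concrete, here is a small sketch (in Python, with made-up coefficients): with a negative quadratic term the slope $b_1 + 2 b_2 x$ is positive for low feature values and negative past the vertex at $-b_1/(2 b_2)$.

```python
# y = b1*x + b2*x^2 with b2 < 0: the slope b1 + 2*b2*x changes sign
# at the vertex x = -b1/(2*b2)
b1, b2 = 20.0, -1.0  # hypothetical coefficients

def slope(x):
    return b1 + 2 * b2 * x

print(slope(5))        # 10.0  -> relation still increasing
print(slope(15))       # -10.0 -> relation now decreasing
print(-b1 / (2 * b2))  # 10.0  -> the predicted maximum
```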
|
44,858
|
Rescale predictions of regression model fitted on scaled predictors
|
The scale function stores the scale and center values it uses to scale the data in an attribute. These can be used to convert predictions on the scaled data back to the original data scale.
# Scale cars data:
scars <- scale(cars)
# Save scaled attributes:
scaleList <- list(scale = attr(scars, "scaled:scale"),
center = attr(scars, "scaled:center"))
# scars is a matrix, make it a data frame like cars for modeling:
scars <- as.data.frame(scars)
smod <- lm(speed ~ dist, data = scars)
# Predictions on scaled data:
sp <- predict(smod, scars)
# Fit the same model to the original cars data:
omod <- lm(speed ~ dist, data = cars)
op <- predict(omod, cars)
# Convert scaled prediction to original data scale:
usp <- sp * scaleList$scale["speed"] + scaleList$center["speed"]
# Compare predictions:
all.equal(op, usp)
If you want to use the model to predict new data with the smod model object, you will need to scale the newdata values using the appropriate values from the scaleList object (do not call the scale function on the newdata directly).
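The same round trip can be sketched outside R; here is a minimal Python version (made-up data, closed-form simple regression) showing that predictions from a fit on standardized variables, unscaled with the outcome's mean and SD, match predictions from a fit on the raw data:

```python
# Sketch: fit y ~ x on standardized data, then unscale the predictions.
x = [4.0, 7.0, 8.0, 9.0, 12.0]
y = [2.0, 10.0, 16.0, 10.0, 22.0]

def mean(v):
    return sum(v) / len(v)

def sd(v):  # sample SD, like R's sd()
    m = mean(v)
    return (sum((vi - m) ** 2 for vi in v) / (len(v) - 1)) ** 0.5

mx, my, sx, sy = mean(x), mean(y), sd(x), sd(y)
xs = [(xi - mx) / sx for xi in x]  # like scale() on the predictor
ys = [(yi - my) / sy for yi in y]  # like scale() on the outcome

# Closed-form OLS on the scaled data; intercept is 0 since both are centered
b = sum(a * c for a, c in zip(xs, ys)) / sum(a * a for a in xs)
preds_scaled = [b * a for a in xs]

# Back-transform: multiply by the outcome's SD, add its mean
preds = [p * sy + my for p in preds_scaled]

# Same answer as fitting on the raw data
b_raw = sum((a - mx) * (c - my) for a, c in zip(x, y)) / sum((a - mx) ** 2 for a in x)
preds_raw = [my + b_raw * (a - mx) for a in x]
print(all(abs(p - q) < 1e-9 for p, q in zip(preds, preds_raw)))  # True
```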
|
|
44,859
|
Rescale predictions of regression model fitted on scaled predictors
|
I have built on skaluzny's answer. If you want a more intuitive way to do this without saving the scale attributes, you can instead use knowledge of what the scale() function does by default (you really only need the last couple of lines of this answer).
The scale function centers (subtracts mean value), and then scales (divides by standard deviation of data):
sdist <- scale(cars$dist)
head(sdist)
[,1]
[1,] -1.5902596
[2,] -1.2798136
[3,] -1.5126481
[4,] -0.8141446
[5,] -1.0469791
[6,] -1.2798136
sdist2<-(cars$dist-mean(cars$dist))/sd(cars$dist)
head(sdist2)
[1] -1.5902596 -1.2798136 -1.5126481 -0.8141446 -1.0469791 -1.2798136
# Note this is only oriented the other way because the scale() function outputs a matrix:
sdist2<-as.matrix(sdist2)
head(sdist2)
# The output now looks identical
[,1]
[1,] -1.5902596
[2,] -1.2798136
[3,] -1.5126481
[4,] -0.8141446
[5,] -1.0469791
[6,] -1.2798136
So instead of storing things as a list, we can actually use the mean and standard deviation of the original data.
# Scale cars data:
scars <- scale(cars)
# Save scaled attributes:
scaleList <- list(scale = attr(scars, "scaled:scale"),
center = attr(scars, "scaled:center"))
scaleList
$`scale`
speed dist
5.287644 25.769377
$center
speed dist
15.40 42.98
> sapply(cars,mean) # note that these values are the same as the `center` values above
speed dist
15.40 42.98
> sapply(cars,sd) # note that these values are the same as the `scale` values above
speed dist
5.287644 25.769377
So now we can check if the predicted values would all be the same if we just use mean() and sd() rather than scale attributes:
# scars is a matrix, make it a data frame like cars for modeling:
scars <- as.data.frame(scars)
smod <- lm(speed ~ dist, data = scars)
# Predictions on scaled data:
sp <- predict(smod, scars)
# Fit the same model to the original cars data:
omod <- lm(speed ~ dist, data = cars)
op <- predict(omod, cars)
# Now the original answer was to use these stored attributes to modify the predictions:
usp1 <- sp * scaleList$scale["speed"] + scaleList$center["speed"]
# We can also simply use the standard deviation and mean from the original dataset:
usp2 <- sp * sd(cars$speed) + mean(cars$speed)
identical(usp1,usp2)
[1] TRUE
all.equal(op, usp1) && all.equal(op, usp2)
[1] TRUE
This might be faster / more efficient if you do it this way since there is no need to create extra dataframes / objects:
Mod <- lm(scale(speed) ~ scale(dist), data = cars) # add scale() function directly to model
Unscaled_Pred <- predict(Mod, cars) * sd(cars$speed) + mean(cars$speed)
all.equal(op, Unscaled_Pred)
[1] TRUE # predictions are the same as the model that was never scaled
|
44,860
|
Arima Model with weekday dummy variables Forecast
|
To start with we will explore different ways that repeating patterns can appear in time series data and how we can model those patterns. This may be overkill for the question; however, I do think that this answer will help you think about what is happening in the models and design better experiments to model your data going forward.
Simple daily seasonality
To begin with, let's think about a daily seasonal ARIMA model. This type of model is looking for some type of pattern that repeats where we see the same thing every day. A time series that this type of model might work well for might look like this:
set.seed(0)
# create a pattern that repeats every 24 hours
daily <- sinpi(seq(1, 3 - 1/24, length.out = 24))
# create a time series out of that repeating pattern 2 weeks long
dailyseas <- ts(rep(daily, 14), frequency = 24)
# add some noise to the data to make it interesting to fit
dailyseas <- dailyseas + rnorm(length(dailyseas), sd = 0.2)
plot(dailyseas, main = "Daily Repeating Pattern")
We can then fit a seasonal AR model to the data and get a pretty good forecast for the series. Since we know that the pattern is pretty stable over time, I will use two seasonal lags rather than just one because this will allow the model to smooth out any noise in the data better.
> (dsmodel <- arima(dailyseas, seasonal = c(2L, 0L, 0L)))
Call:
arima(x = dailyseas, seasonal = c(2L, 0L, 0L))
Coefficients:
sar1 sar2 intercept
0.4254 0.5396 0.0094
s.e. 0.0455 0.0465 0.1368
sigma^2 estimated as 0.05491: log likelihood = -20.56, aic = 49.13
This is just about perfect: the intercept is near zero, which it should be, and the sum of the coefficients on the seasonal lags is close to 1, meaning the forecast is about an average of them (e.g. we predict the value tomorrow at noon to be about the average of today and yesterday at noon). It is important to note we let the ARIMA model know how often the pattern repeated itself by making a ts object and setting frequency = 24. Alternatively, we could have used a plain vector for the series and set seasonal = list(order = c(2L, 0L, 0L), period = 24).
This works well when we have a simple repeating pattern, but what if we have a day of week effect?
Daily Seasonality with a Day of Week effect
A day of week effect is a consistent impact on the underlying series based on the day of the week. We can add a day of week effect to our data using:
# Create an adjustment for day of week: we will leave Monday as 0
# so later it is easy to see the change in other days relative to
# Monday.
(doweffect <- c(0 , sample(c(-3, -2, -1, 1, 2, 3))))
# add our day of week effect to the original series
dowseas <- dailyseas + rep(doweffect, 2, each = 24)
plot(dowseas)
We can handle this new pattern in our data in one of two ways: 1) adding external regressors to our original ARIMA model, or 2) treating the weekly repeating pattern in the data as the new seasonality of the data. In an ARIMA model with external regressors, we are looking for some sort of ARIMA-type pattern that is "thrown off" by some amount by the things quantified by the external regressors. In the model with weekly seasonality, we are looking for the interaction of the daily and weekly pattern and ignoring that the daily pattern is still present in each day of the week. Below we create the original seasonal model as well as the 2 variants.
# Creates a model matrix to indicate the day of week for values in
# our time series. Note the model matrix does not have a column to
# indicate Monday. The purpose of the model matrix is to allow the
# model to include the impact of a value relative to some baseline,
# usually the first factor level; in this case, Monday. We also
# remove the intercept term since intercept is already part of the
# ARIMA models.
> dowreg <- model.matrix(~as.factor(rep(1:7, 2, each = 24)))[, -1]
> colnames(dowreg) <- c("Tues", "Weds", "Thurs", "Fri", "Sat", "Sun")
# Creates a first model where the day of week is ignored
> (dowsmodel <- arima(dowseas, seasonal = c(2L, 0L, 0L)))
Call:
arima(x = dowseas, seasonal = c(2L, 0L, 0L))
Coefficients:
sar1 sar2 intercept
0.1085 -0.1550 0.0170
s.e. 0.0588 0.0612 0.1123
sigma^2 estimated as 4.455: log likelihood = -728.46, aic = 1464.93
Hmm, that doesn't look great: our seasonal coefficients nearly cancel each other out, so the daily pattern has been lost. Let's look at our external regressor model:
# Creates a model where the day of week effect is accounted for using
# an external regressor
> (dowxreg <- arima(dowseas, seasonal = c(2L, 0L, 0L), xreg = dowreg))
Call:
arima(x = dowseas, seasonal = c(2L, 0L, 0L), xreg = dowreg)
Coefficients:
sar1 sar2 intercept Tues Weds Thurs Fri Sat Sun
0.4301 0.5351 -0.0080 1.0376 2.0036 -0.9759 -1.9841 -3.0051 3.0363
s.e. 0.0459 0.0468 0.1389 0.0394 0.0360 0.0424 0.0427 0.0413 0.0438
sigma^2 estimated as 0.05457: log likelihood = -19.51, aic = 59.01
This is much better, the two seasonal lags (sar1 and sar2) are basically taking an average again like they did in our other model, and the external regressors are adjusting the days by the correct amount (1 for Tues, 2 for Weds, -1 for Thurs, -2 for Fri, -3 for Sat and 3 for Sun). How about our weekly seasonal model:
# Creates a model where the day of week effect is accounted for by
# increasing the seasonality to weekly rather than daily. This time
# we can only use 1 seasonal lag because our data don't have
# enough seasonal periods at the weekly frequency.
> (dowlongs <- arima(dowseas, seasonal = list(order = c(1L, 0L, 0L),
+ period = 24*7)))
Call:
arima(x = dowseas, seasonal = list(order = c(1L, 0L, 0L), period = 24 * 7))
Coefficients:
sar1 intercept
1 0.005
s.e. NaN 1400.371
sigma^2 estimated as 0.03082: log likelihood = -8.64, aic = 23.27
Once again, this looks good: it is predicting that next Monday at noon should be the same as last Monday at noon, which makes sense for this data. Let's look at how this turns out in the forecasts:
par(mfrow = c(1,1))
plot(forecast(dowsmodel, 48), PI = FALSE, xlim = c(8, 17),
main = "Forecasts from Various Seasonal AR models")
lines(forecast(dowxreg, 48, xreg = dowreg[1:48, ])$mean, col = "red", lwd = 2)
lines(forecast(dowlongs, 48)$mean, col = "green", lwd = 2)
abline(v = 8, lty = 2)
legend("topleft", bty = "n",
c("Seasonal AR", "Seasonal AR with DoW", "Long Seasonal AR"),
fill = c("blue", "red", "green"))
While the original model that worked so well before falls apart, we see that both of the other approaches work well for this new data. The model with external regressors is able to find the daily pattern that is occurring once it takes into account the effect of the day of week. The weekly seasonal model is missing the daily pattern, but is able to overcome that by seeing the larger pattern that repeats every week.
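The intuition behind the external-regressor approach can be sketched in a few lines (Python here, with a made-up sine pattern and hypothetical day offsets): once each day's offset is subtracted, the same daily pattern reappears every day.

```python
import math

# Shared 24-hour pattern plus a fixed offset for each day of the week
daily = [math.sin(2 * math.pi * h / 24) for h in range(24)]
offsets = [0, 1, 2, -1, -2, -3, 3]  # hypothetical Mon..Sun effects
series = [daily[h] + offsets[d] for d in range(7) for h in range(24)]

# "Adjusting for day of week": subtract each day's mean level
adjusted = []
for d in range(7):
    day = series[d * 24:(d + 1) * 24]
    m = sum(day) / 24
    adjusted.extend(v - m for v in day)

# What is left is (almost exactly) the same daily pattern every day
resid = max(abs(adjusted[h] - adjusted[h + 24]) for h in range(24))
print(resid < 1e-9)  # True: the daily pattern is identical across days
```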
Distinct pattern for each day of the week
Now we are finally going to get into what you claim to see in your data: a daily pattern which is different for each day of the week. We can make a series with this property as follows:
# Creates a time series where each day of week has a unique pattern
dailysig <- c(daily,
sort(daily),
sort(daily, decreasing = TRUE),
abs(daily),
exp(daily),
log(daily + 2),
cos(daily))
# Creates a two week long version of this series with a noise component
diffdaily <- ts(rep(dailysig, 2), frequency = 24)
diffdaily <- diffdaily + rnorm(length(diffdaily), sd = 0.2)
plot(diffdaily, main = "Unique Pattern for Each Day of Week")
We see that Monday's pattern doesn't really look like Tuesday's or Wednesday's, etc. Let's examine what happens if we try to fit each of our 3 types of models to this data set.
# Makes the same three models as last time
> (dowsmodel <- arima(diffdaily, seasonal = c(2L, 0L, 0L)))
Call:
arima(x = diffdaily, seasonal = c(2L, 0L, 0L))
Coefficients:
sar1 sar2 intercept
0.2488 -0.2669 0.4651
s.e. 0.0533 0.0537 0.0390
sigma^2 estimated as 0.509: log likelihood = -365.57, aic = 739.14
> (dowxreg <- arima(diffdaily, seasonal = c(2L, 0L, 0L), xreg = dowreg))
Call:
arima(x = diffdaily, seasonal = c(2L, 0L, 0L), xreg = dowreg)
Coefficients:
sar1 sar2 intercept Tues Weds Thurs Fri Sat Sun
0.0756 -0.3189 0.0224 -0.0670 -0.0105 0.5791 1.1814 0.6315 0.7423
s.e. 0.0531 0.0532 0.0875 0.1205 0.1419 0.1241 0.1199 0.1325 0.1234
sigma^2 estimated as 0.3413: log likelihood = -298.76, aic = 617.52
> (dowlongs <- arima(diffdaily, seasonal = list(order = c(1L, 0L, 0L), period = 24*7)))
Call:
arima(x = diffdaily, seasonal = list(order = c(1L, 0L, 0L), period = 24 * 7))
Coefficients:
sar1 intercept
0.9254 0.4591
s.e. 0.0111 0.0575
sigma^2 estimated as 0.08293: log likelihood = -221.5, aic = 448.99
> plot(forecast(dowsmodel, 48), PI = FALSE, xlim = c(8, 17),
+ main = "Forecasts from Various Seasonal AR models for Different DoW Effects")
> lines(forecast(dowxreg, 48, xreg = dowreg[1:48, ])$mean, col = "red", lwd = 2)
> lines(forecast(dowlongs, 48)$mean, col = "green", lwd = 2)
> abline(v = 8, lty = 2)
> legend("topleft", bty = "n",
+ c("Seasonal AR", "Seasonal AR with DoW", "Long Seasonal AR"),
+ fill = c("blue", "red", "green"))
Results fell apart for the first two models this time. Since there isn't an underlying daily pattern any more, the model with external regressors was only able to see that certain days of the week are higher or lower on average, but the pattern from hour to hour was missed. The weekly seasonal model, however, was still able to see the weekly repeat and make a reasonable model.
Your Data
Now that we have seen the importance of seasonality in our models, let's see what happens if we try running auto.arima again, but this time making your data a seasonal time series.
> tsTrain <- ts(tsTrain, frequency = 24)
> (dowsmodel <- auto.arima(tsTrain))
Series: tsTrain
ARIMA(0,0,0)(1,0,0)[24] with non-zero mean
Coefficients:
sar1 mean
0.0508 8.4899
s.e. 0.0579 0.2452
sigma^2 estimated as 17.31: log likelihood=-928.96
AIC=1863.91 AICc=1863.98 BIC=1875.36
> (dowxreg <- auto.arima(tsTrain, xreg = dowreg))
Series: tsTrain
Regression with ARIMA(0,0,0)(1,0,0)[24] errors
Coefficients:
sar1 intercept Tues Weds Thurs Fri Sat Sun
0.0841 7.7401 -0.4165 2.3504 0.0211 1.1671 1.8975 0.3029
s.e. 0.0587 0.5991 0.8074 0.8496 0.8617 0.8520 0.8461 0.8288
sigma^2 estimated as 16.63: log likelihood=-919.55
AIC=1857.1 AICc=1857.65 BIC=1891.45
> (dowlongs <- auto.arima(ts(tsTrain, frequency = 24*7)))
Series: ts(tsTrain, frequency = 24 * 7)
ARIMA(0,0,2) with non-zero mean
Coefficients:
ma1 ma2 mean
0.2433 -0.0171 8.5118
s.e. 0.0583 0.0506 0.2782
sigma^2 estimated as 16.51: log likelihood=-921.06
AIC=1850.12 AICc=1850.24 BIC=1865.39
> plot(forecast(dowsmodel, 24), PI = FALSE, xlim = c(8, 16),
+ main = "Forecasts from Various Seasonal AR models for Different DoW Effects")
> lines(forecast(dowxreg, 24, xreg = dowreg[1:24, ])$mean, col = "red", lwd = 2)
> lines(ts(forecast(dowlongs, 24)$mean, start = 15, frequency = 24),
+ col = "green", lwd = 2)
> abline(v = 8, lty = 2)
> legend("topleft", bty = "n",
+ c("Seasonal AR", "Seasonal AR with DoW", "Long Seasonal AR"),
+ fill = c("blue", "red", "green"))
The "each weekday seems to have a distinct 24 hour pattern" doesn't seem to be happening as seen by the trouble fitting a the weekly seasonal model, but there does seem to be a daily seasonality the models are picking up on since. Personally, I would trust the plain seasonal model (no external regressors) the most since it is less prone to over fitting than the one with external regressors, but that is your call. In general, you might feel disappointed since the forecasts don't look much like your data. This is because there is a lot of noise in your data that the model still can't account for.
Conclusions
A seasonal model will allow you to find repeating patterns in your data.
Adding external regressors to your model can allow the model to find the underlying pattern when the pattern is obscured by another influence.
If every day of the week has a different pattern, that is a weekly seasonality, not a day of week effect.
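The third conclusion can be checked with a tiny sketch (Python, made-up data): when each weekday has its own 24-hour shape, a seasonal-naive forecast at the weekly lag (168 hours) is exact, while one at the daily lag (24 hours) is not.

```python
import math

# One distinct 24-hour shape per weekday; the true period is a week (168 hours)
week = [math.sin(2 * math.pi * h / 24 + d) for d in range(7) for h in range(24)]
series = week * 3  # three weeks of hourly data

# Seasonal-naive forecasts for the final week
actual = series[-168:]
daily_naive = series[-192:-24]    # y[t] = y[t - 24]
weekly_naive = series[-336:-168]  # y[t] = y[t - 168]

def mae(forecast, obs):
    return sum(abs(f - o) for f, o in zip(forecast, obs)) / len(obs)

print(mae(weekly_naive, actual))      # 0.0: weekly seasonality nails it
print(mae(daily_naive, actual) > 0)   # True: daily seasonality misses
```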
|
Arima Model with weekday dummy variables Forecast
|
To start with we will explore different ways that repeating patterns can appear in time series data and how we can model those patterns. This may be over kill for the question, however I do think tha
|
Arima Model with weekday dummy variables Forecast
To start with we will explore different ways that repeating patterns can appear in time series data and how we can model those patterns. This may be over kill for the question, however I do think that this answer will help you think about what is happening in the models and design better experiments to model your data going forward.
Simple daily seasonality
To begin with, lets think about a daily seasonal ARIMA model. This type of model is looking for some type of pattern that repeats where we see the same thing every day. A time series that this type of model might work well for might look like this:
set.seed(0)
# create a pattern that repeats every 24 hours
daily <- sinpi(seq(1, 3 - 1/24, length.out = 24))
# create a time series out of that repeating pattern 2 weeks long
dailyseas <- ts(rep(daily, 14), frequency = 24)
# add some noise to the data to make it interesting to fit
dailyseas <- dailyseas + rnorm(length(dailyseas), sd = 0.2)
plot(dailyseas, main = "Daily Repeating Pattern")
We can then fit a seasonal AR model to the data and get a pretty good forecast for the series. Since we know that the pattern is pretty stable over time, I will use two seasonal lags rather than just one because this will allow the model to smooth out any noise in the data better.
> (dsmodel <- arima(dailyseas, seasonal = c(2L, 0L, 0L)))
Call:
arima(x = dailyseas, seasonal = c(2L, 0L, 0L))
Coefficients:
sar1 sar2 intercept
0.4254 0.5396 0.0094
s.e. 0.0455 0.0465 0.1368
sigma^2 estimated as 0.05491: log likelihood = -20.56, aic = 49.13
This is just about perfect, the intercept is near zero, which it should be, and the sum of the coefficients on the seasonal lags is close to 1, meaning the forecast is about an average of them (eg. we predict the value tomorrow at noon is about the average of today and yesterday at noon). It is important to note we let the ARIMA model know how often the pattern repeated itself by making a ts object and setting the frequency = 24. Alternatively, we could have used a vector for the series and set seasonal = list(order = c(1L, 0L, 0L), period = 24).
This works well when we have a simple repeating pattern, but what if we have a day of week effect.
Daily Seasonality with a Day of Week effect
A day of week effect is an consistant impact on the underlying series we see based on the day of the week. We can add a day of week effect to our data using:
# Create an adjustment for day of week: we will leave Monday as 0
# so later it is easy to see the change in other days relative to
# Monday.
(doweffect <- c(0 , sample(c(-3, -2, -1, 1, 2, 3))))
# add our day of week effect to the original series
dowseas <- dailyseas + rep(doweffect, 2, each = 24)
plot(dowseas)
We handle this new pattern in our data in one of two ways 1) adding external regressors to our original ARIMA model or 2) thinking of the weekly repeating pattern in the data as the new seasonality of the data. In an ARIMA model with external regressors, we are looking for some sort of ARIMA type pattern that is "thrown off" by some amount by the things quantified by the external regressors. In the model with weekly seasonality, we are looking for the interaction of the daily and weekly pattern and ignoring that the daily pattern is still present in each day of the week. Below we create the original seasonal model as well as the 2 variants.
# Creates a model matrix to indicate the day of week for values in
# our time series. Note the model matrix does not have a column to
# indicate Monday. The purpose of the model matrix is to allow the
# model to include the impact of a value relative to some baseline,
# usually the first factor level; in this case, Monday. We also
# remove the intercept term since intercept is already part of the
# ARIMA models.
> dowreg <- model.matrix(~as.factor(rep(1:7, 2, each = 24)))[, -1]
> colnames(dowreg) <- c("Tues", "Weds", "Thurs", "Fri", "Sat", "Sun")
# Creates a first model where the day of week is ignored
> (dowsmodel <- arima(dowseas, seasonal = c(2L, 0L, 0L)))
Call:
arima(x = dowseas, seasonal = c(2L, 0L, 0L))
Coefficients:
sar1 sar2 intercept
0.1085 -0.1550 0.0170
s.e. 0.0588 0.0612 0.1123
sigma^2 estimated as 4.455: log likelihood = -728.46, aic = 1464.93
Hmm, that doesn't look great, our intercept is near zero and our seasonal coefficients are nearly canceling each other out. Lets look at our external regressor model:
# Creates a model where the day of week effect is accounted for using
# an external regressor
> (dowxreg <- arima(dowseas, seasonal = c(2L, 0L, 0L), xreg = dowreg))
Call:
arima(x = dowseas, seasonal = c(2L, 0L, 0L), xreg = dowreg)
Coefficients:
sar1 sar2 intercept Tues Weds Thurs Fri Sat Sun
0.4301 0.5351 -0.0080 1.0376 2.0036 -0.9759 -1.9841 -3.0051 3.0363
s.e. 0.0459 0.0468 0.1389 0.0394 0.0360 0.0424 0.0427 0.0413 0.0438
sigma^2 estimated as 0.05457: log likelihood = -19.51, aic = 59.01
This is much better, the two seasonal lags (sar1 and sar2) are basically taking an average again like they did in our other model, and the external regressors are adjusting the days by the correct amount (1 for Tues, 2 for Weds, -1 for Thurs, -2 for Fri, -3 for Sat and 3 for Sun). How about our weekly seasonal model:
# Creates a model where the day of week effect is accounted for by
# increasing the seasonality to weekly rather than daily. This time
# we can only use 1 seasonal lag because our data only don't have
# enough seasonal periods at the weekly frequency.
> (dowlongs <- arima(dowseas, seasonal = list(order = c(1L, 0L, 0L),
+ period = 24*7)))
Call:
arima(x = dowseas, seasonal = list(order = c(1L, 0L, 0L), period = 24 * 7))
Coefficients:
sar1 intercept
1 0.005
s.e. NaN 1400.371
sigma^2 estimated as 0.03082: log likelihood = -8.64, aic = 23.27
Once again, this looks good, it is predicting next Monday at noon should be the same as last Monday at noon, which makes sense for this data. Lets look at how this turns out in the forecasts:
par(mfrow = c(1,1))
plot(forecast(dowsmodel, 48), PI = FALSE, xlim = c(8, 17),
main = "Forecasts from Various Seasonal AR models")
lines(forecast(dowxreg, 48, xreg = dowreg[1:48, ])$mean, col = "red", lwd = 2)
lines(forecast(dowlongs, 48)$mean, col = "green", lwd = 2)
abline(v = 8, lty = 2)
legend("topleft", bty = "n",
c("Seasonal AR", "Seasonal AR with DoW", "Long Seasonal AR"),
fill = c("blue", "red", "green"))
While the original mode that worked so well before falls apart, we see that both of the other approaches work well for this new data. The model with external regressors is able to find the daily pattern that is occurring once it takes into account the effect of the day of week. The weekly seasonal model is missing the daily pattern, but is able to overcome that by seeing the larger pattern that repeats every week.
Distinct pattern for each day of the week
Now we are finally going to get into what you claim to see in your data; a daily pattern which is different for each day of the week. We can make a series with this property as follows:
# Creates a time series where each day of week has a unique pattern
dailysig <- c(daily,
sort(daily),
sort(daily, decreasing = TRUE),
abs(daily),
exp(daily),
log(daily + 2),
cos(daily))
# Creates a two week long version of this series with a noise component
diffdaily <- ts(rep(dailysig, 2), frequency = 24)
diffdaily <- diffdaily + rnorm(length(diffdaily), sd = 0.2)
plot(diffdaily, main = "Unique Pattern for Each Day of Week")
We see that Monday's pattern doesn't really look like Tuesday's or Wednesday's etc. Lets examine what happens if we try and make each of our 3 types of models for this data set.
# Makes the same three models as last time
> (dowsmodel <- arima(diffdaily, seasonal = c(2L, 0L, 0L)))
Call:
arima(x = diffdaily, seasonal = c(2L, 0L, 0L))
Coefficients:
sar1 sar2 intercept
0.2488 -0.2669 0.4651
s.e. 0.0533 0.0537 0.0390
sigma^2 estimated as 0.509: log likelihood = -365.57, aic = 739.14
> (dowxreg <- arima(diffdaily, seasonal = c(2L, 0L, 0L), xreg = dowreg))
Call:
arima(x = diffdaily, seasonal = c(2L, 0L, 0L), xreg = dowreg)
Coefficients:
sar1 sar2 intercept Tues Weds Thurs Fri Sat Sun
0.0756 -0.3189 0.0224 -0.0670 -0.0105 0.5791 1.1814 0.6315 0.7423
s.e. 0.0531 0.0532 0.0875 0.1205 0.1419 0.1241 0.1199 0.1325 0.1234
sigma^2 estimated as 0.3413: log likelihood = -298.76, aic = 617.52
> (dowlongs <- arima(diffdaily, seasonal = list(order = c(1L, 0L, 0L), period = 24*7)))
Call:
arima(x = diffdaily, seasonal = list(order = c(1L, 0L, 0L), period = 24 * 7))
Coefficients:
sar1 intercept
0.9254 0.4591
s.e. 0.0111 0.0575
sigma^2 estimated as 0.08293: log likelihood = -221.5, aic = 448.99
> plot(forecast(dowsmodel, 48), PI = FALSE, xlim = c(8, 17),
+ main = "Forecasts from Various Seasonal AR models for Different DoW Effects")
> lines(forecast(dowxreg, 48, xreg = dowreg[1:48, ])$mean, col = "red", lwd = 2)
> lines(forecast(dowlongs, 48)$mean, col = "green", lwd = 2)
> abline(v = 8, lty = 2)
> legend("topleft", bty = "n",
+ c("Seasonal AR", "Seasonal AR with DoW", "Long Seasonal AR"),
+ fill = c("blue", "red", "green"))
Results fell apart for the first two models this time. Since there isn't an underlying daily pattern any more, the model with external regressors was only able to see that certain days of the week are higher or lower on average, and the hour-to-hour pattern was missed. The weekly seasonal model, however, was still able to see the weekly repeat and produce a reasonable model.
Your Data
Now that we have seen the importance of seasonality in our models, let's see what happens if we try running auto.arima again, but this time treating your data as a seasonal time series.
> tsTrain <- ts(tsTrain, frequency = 24)
> (dowsmodel <- auto.arima(tsTrain))
Series: tsTrain
ARIMA(0,0,0)(1,0,0)[24] with non-zero mean
Coefficients:
sar1 mean
0.0508 8.4899
s.e. 0.0579 0.2452
sigma^2 estimated as 17.31: log likelihood=-928.96
AIC=1863.91 AICc=1863.98 BIC=1875.36
> (dowxreg <- auto.arima(tsTrain, xreg = dowreg))
Series: tsTrain
Regression with ARIMA(0,0,0)(1,0,0)[24] errors
Coefficients:
sar1 intercept Tues Weds Thurs Fri Sat Sun
0.0841 7.7401 -0.4165 2.3504 0.0211 1.1671 1.8975 0.3029
s.e. 0.0587 0.5991 0.8074 0.8496 0.8617 0.8520 0.8461 0.8288
sigma^2 estimated as 16.63: log likelihood=-919.55
AIC=1857.1 AICc=1857.65 BIC=1891.45
> (dowlongs <- auto.arima(ts(tsTrain, frequency = 24*7)))
Series: ts(tsTrain, frequency = 24 * 7)
ARIMA(0,0,2) with non-zero mean
Coefficients:
ma1 ma2 mean
0.2433 -0.0171 8.5118
s.e. 0.0583 0.0506 0.2782
sigma^2 estimated as 16.51: log likelihood=-921.06
AIC=1850.12 AICc=1850.24 BIC=1865.39
> plot(forecast(dowsmodel, 24), PI = FALSE, xlim = c(8, 16),
+ main = "Forecasts from Various Seasonal AR models for Different DoW Effects")
> lines(forecast(dowxreg, 24, xreg = dowreg[1:24, ])$mean, col = "red", lwd = 2)
> lines(ts(forecast(dowlongs, 24)$mean, start = 15, frequency = 24),
+ col = "green", lwd = 2)
> abline(v = 8, lty = 2)
> legend("topleft", bty = "n",
+ c("Seasonal AR", "Seasonal AR with DoW", "Long Seasonal AR"),
+ fill = c("blue", "red", "green"))
The "each weekday seems to have a distinct 24 hour pattern" doesn't seem to be happening as seen by the trouble fitting a the weekly seasonal model, but there does seem to be a daily seasonality the models are picking up on since. Personally, I would trust the plain seasonal model (no external regressors) the most since it is less prone to over fitting than the one with external regressors, but that is your call. In general, you might feel disappointed since the forecasts don't look much like your data. This is because there is a lot of noise in your data that the model still can't account for.
Conclusions
A seasonal model will allow you to find repeating patterns in your data.
Adding external regressors to your model can allow the model to find the underlying pattern when the pattern is obscured by another influence.
If every day of the week has a different pattern, that is a weekly seasonality, not a day of week effect.
|
44,861
|
Arima Model with weekday dummy variables Forecast
|
Hyndman's docs say the xreg vector needs to have the same number of rows as the time series. In your code, where you define 'Weekdays', you are missing a comma before the closing square bracket.
If this external regressor approach doesn't work, I'd try fitting a seasonal ARIMA model with m=7 manually.
|
44,862
|
p values and significance in RLM (MASS package) R
|
The sfsmisc package offers a helpful function for conducting a Wald test:
library(MASS)
library(sfsmisc)
summary(rsl <- rlm(stack.loss ~ ., stackloss))
#Call: rlm(formula = stack.loss ~ ., data = stackloss)
#Residuals:
# Min 1Q Median 3Q Max
#-8.91753 -1.73127 0.06187 1.54306 6.50163
#
#Coefficients:
# Value Std. Error t value
#(Intercept) -41.0265 9.8073 -4.1832
#Air.Flow 0.8294 0.1112 7.4597
#Water.Temp 0.9261 0.3034 3.0524
#Acid.Conc. -0.1278 0.1289 -0.9922
#
#Residual standard error: 2.441 on 17 degrees of freedom
f.robftest(rsl, var = "Air.Flow")
# robust F-test (as if non-random weights)
#
#data: from rlm(formula = stack.loss ~ ., data = stackloss)
#F = 50.879, p-value = 1.677e-06
#alternative hypothesis: true Air.Flow is not equal to 0
f.robftest(rsl, var = "Acid.Conc.")
# robust F-test (as if non-random weights)
#
#data: from rlm(formula = stack.loss ~ ., data = stackloss)
#F = 1.0447, p-value = 0.3211
#alternative hypothesis: true Acid.Conc. is not equal to 0
|
44,863
|
How To quickly do derivatives with respect to matrices
|
There is something called the Matrix Cookbook, which includes a lot of identities and matrix derivatives. So if we look at eq. (88) of the Matrix Cookbook,
$$\frac{\partial}{\partial A} (\mathbf{x} -\mathbf{A}\mathbf{s})^T\mathbf{W}(\mathbf{x} -\mathbf{A}\mathbf{s}) = -2\mathbf{W}(\mathbf{x}-\mathbf{A}\mathbf{s})\mathbf{s}^T$$
we see that this directly refers to your problem, if we assume $\Sigma^{-1}$ is a covariance matrix and therefore symmetric.
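To convince yourself the identity is right, you can compare the closed form against a brute-force finite-difference gradient. The sketch below uses Python with NumPy (an assumption here, since the surrounding answers use R) with randomly generated x, s, A and a symmetric weight matrix W:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 4, 3
x = rng.standard_normal(n)
s = rng.standard_normal(k)
A = rng.standard_normal((n, k))
B = rng.standard_normal((n, n))
W = B @ B.T  # symmetric weight matrix, as in the identity

def f(A):
    # The quadratic form (x - As)^T W (x - As)
    r = x - A @ s
    return r @ W @ r

# Closed form from Matrix Cookbook eq. (88)
grad_exact = -2.0 * np.outer(W @ (x - A @ s), s)

# Central finite-difference approximation, entry by entry
eps = 1e-6
grad_fd = np.zeros_like(A)
for i in range(n):
    for j in range(k):
        E = np.zeros_like(A)
        E[i, j] = eps
        grad_fd[i, j] = (f(A + E) - f(A - E)) / (2 * eps)

assert np.allclose(grad_exact, grad_fd, atol=1e-4)
```

The symmetry of W is what lets the two cross terms in the differential collapse into the single $-2\mathbf{W}(\mathbf{x}-\mathbf{A}\mathbf{s})\mathbf{s}^T$ expression; for a non-symmetric W the general form involves $\mathbf{W}+\mathbf{W}^T$.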
|
44,864
|
How is an ROC curve constructed for a set of data?
|
Here's an example of calculating an ROC curve. There are many things that ROC curves are used for, but this will give an overall idea of how an ROC curve is created.
Let's say that we want to use white blood cell counts to diagnose appendicitis. We'd like to collect a white blood cell count from a patient and then tell them if they have appendicitis or not. We'll call it our "Appendicitis Test". (This is just an example, no guarantees that this makes medical sense.)
We'd like to make an ROC curve for our Appendicitis Test. We have 50 patients that we can use. We know two things about these patients: 1) whether or not the patient has appendicitis and 2) what the white blood cell count of the patient is.
With these two pieces of information, we can create an ROC curve.
Here's the fake data of our patients:
#White blood cell counts for our 50 patients (WBC)
WBC = c(10:59)
#Whether they have appendicitis (Append: 1=yes, 0=no)
Append=c(0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,1,0,0,0,0,0,0,1,1,1,1,1,0,0,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1,1)
#Combine our data
dat = data.frame(cbind(WBC, Append))
We decide that a white blood cell count above 30 is a pretty good cutoff for deciding whether a patient has appendicitis. But we're not sure. So we try 5 possible white blood cell cutoffs that would indicate that a patient has appendicitis: 10, 20, 30, 40, and 50. Now, we need to find the sensitivity and specificity at each cutoff to see how well the test detects appendicitis at each cutoff.
#Create variables that equal 1 if the test detects
#appendicitis (whether it's correct or not) using our five cutoffs.
dat$yes10 = ifelse(dat$WBC > 10, 1, 0)
dat$yes20 = ifelse(dat$WBC > 20, 1, 0)
dat$yes30 = ifelse(dat$WBC > 30, 1, 0)
dat$yes40 = ifelse(dat$WBC > 40, 1, 0)
dat$yes50 = ifelse(dat$WBC > 50, 1, 0)
Now calculate the sensitivity and specificity at each cutoff. Sensitivity is the fraction of patients who actually have appendicitis that the test flags (true positives over all actual positives), and specificity is the fraction of patients without appendicitis that the test clears (true negatives over all actual negatives):
sensitivity <- c(sum(dat$yes10 == 1 & dat$Append == 1)/sum(dat$Append == 1),
                 sum(dat$yes20 == 1 & dat$Append == 1)/sum(dat$Append == 1),
                 sum(dat$yes30 == 1 & dat$Append == 1)/sum(dat$Append == 1),
                 sum(dat$yes40 == 1 & dat$Append == 1)/sum(dat$Append == 1),
                 sum(dat$yes50 == 1 & dat$Append == 1)/sum(dat$Append == 1))
specificity <- c(sum(dat$yes10 == 0 & dat$Append == 0)/sum(dat$Append == 0),
                 sum(dat$yes20 == 0 & dat$Append == 0)/sum(dat$Append == 0),
                 sum(dat$yes30 == 0 & dat$Append == 0)/sum(dat$Append == 0),
                 sum(dat$yes40 == 0 & dat$Append == 0)/sum(dat$Append == 0),
                 sum(dat$yes50 == 0 & dat$Append == 0)/sum(dat$Append == 0))
Then plot the True Positive Rate (sensitivity) against the False Positive Rate (1 - specificity) at each cutoff to get the ROC curve for the appendicitis test.
TruePositiveRate = sensitivity
FalsePositiveRate = 1 - specificity
ROCCurve = data.frame(cbind(FalsePositiveRate, TruePositiveRate))
plot(ROCCurve, main="ROC Curve for Appendicitis Test")
lines(ROCCurve, col="red")
Now we can see that a cutoff of 30 is a good choice: it gives roughly a 92% true positive rate at roughly a 21% false positive rate. Raising the cutoff to 40 eliminates false positives entirely, but catches only about 73% of the true appendicitis cases.
|
44,865
|
How is an ROC curve constructed for a set of data?
|
A ROC curve is calculated from an independent risk prediction or risk score that has been merged to validation data containing observed binary outcome variables, e.g. life or death, recurrence or remission, guilty or innocent, etc.
The range of possible values for that risk prediction/score is sorted and enumerated from least to greatest as a range of possible cutpoints. For each cutpoint, the continuous risk variable is dichotomized, producing a classification of the observed event. This binary value is compared to the observed outcomes and the sensitivity and specificity are calculated each time. Plotting sensitivity and specificity over the range of possible cutpoints produces a ROC.
Note that the line path produced by these cutpoints does not convey what the actual cutpoint is. For that reason, it is often nice to annotate a ROC curve with example sens/spec combinations for a couple of cutoff points.
Example R code here:
ROC <- function(T, D) {
  # Cross-tabulate scores (negated so higher scores sort first) by outcome
  DD <- table(-T, D)
  # Cumulative true and false positive rates over the sorted cutpoints
  tpr <- cumsum(DD[, 2]) / sum(DD[, 2])
  fpr <- cumsum(DD[, 1]) / sum(DD[, 1])
  rval <- list(tpr = tpr, fpr = fpr,
               cutpoints = rev(sort(unique(T))),
               call = sys.call())
  class(rval) <- "ROC"
  rval
}
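The table/cumsum trick above can be mirrored in other languages. Here is a sketch of the same idea in Python with NumPy (an assumption, since the answer itself uses R); note that it handles tied scores per observation rather than grouping them as R's table() does:

```python
import numpy as np

def roc_points(scores, labels):
    """TPR and FPR at every cutpoint, highest score first."""
    order = np.argsort(-np.asarray(scores))    # sort scores descending
    y = np.asarray(labels)[order]
    tpr = np.cumsum(y == 1) / np.sum(y == 1)   # cumulative true positive rate
    fpr = np.cumsum(y == 0) / np.sum(y == 0)   # cumulative false positive rate
    return fpr, tpr

# A perfectly separating score: the curve reaches TPR = 1 at FPR = 0
fpr, tpr = roc_points([0.9, 0.8, 0.2, 0.1], [1, 1, 0, 0])
```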
|
44,866
|
How is an ROC curve constructed for a set of data?
|
So say your model gives some % prediction between 0 and 100%. Let's call that Y. And your classifier is either A or B.
My understanding is that an ROC curve is built by varying a threshold, let's call it "K", over values between 0 and 100%. For every value of K and every estimate Y, you say something like: if K is greater than Y, then it's an A, otherwise it's a B.
Then you build a confusion matrix for every K. From each of those confusion matrices, you can determine the true positive rate (the y value on the ROC curve) and the false positive rate (the x value on the ROC curve).
Hence, if you have perfect separation between A and B, your ROC curve will go straight up and to the left.
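That procedure can be written down directly: one confusion matrix per threshold K, and one (FPR, TPR) point per matrix. A minimal Python sketch (an assumption for illustration, coding class B as 1 and predicting B whenever the estimate Y is at least K):

```python
import numpy as np

def roc_by_threshold(probs, labels, thresholds):
    """Build one confusion matrix per threshold K; return (FPR, TPR) pairs."""
    probs, labels = np.asarray(probs), np.asarray(labels)
    points = []
    for K in thresholds:
        pred = probs >= K                      # classify as "B" when Y >= K
        tp = np.sum(pred & (labels == 1))
        fp = np.sum(pred & (labels == 0))
        fn = np.sum(~pred & (labels == 1))
        tn = np.sum(~pred & (labels == 0))
        points.append((fp / (fp + tn), tp / (tp + fn)))  # (FPR, TPR)
    return points

# Perfectly separated classes: some threshold reaches TPR = 1 at FPR = 0
pts = roc_by_threshold([0.1, 0.2, 0.8, 0.9], [0, 0, 1, 1],
                       thresholds=[0.0, 0.5, 1.1])
```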
|
44,867
|
How is an ROC curve constructed for a set of data?
|
A ROC Curve is not constructed for a set of data, it is constructed for the results of a classification performed on a set of data.
There are models (or methods of implementing them) that produce multiple ROC curves for a single model and data set: say, one for the results of the model applied to the training set itself and one for the results of the model applied to the validation set. Multiple ROC curves for a single model and data set would most likely be of this variety.
|
44,868
|
Why can't I simulate variables with negative correlation? How can I fix it?
|
Your correlation matrix is not positive definite. This means that it is not possible for a real dataset to have generated it.
> det(M)
[1] -0.2496
This works and has a negative correlation:
> M=matrix(c(1.0, 0.6, 0.6, 0.6,
0.6, 1.0, -0.2, 0.3,
0.6, -0.2, 1.0, 0.3,
0.6, 0.3, 0.3, 1.0)
,nrow=4, ncol=4)
>
> det(M)
[1] 0.0528
Your code doesn't run, because megf doesn't get defined.
You can save a little effort by using the mvrnorm() function in the MASS package.
> library(MASS)
> set.seed(1234) #Set seed for replicability
> r <- mvrnorm(n=1000, Sigma=M, mu=rep(0, 4) )
> cor(r)
[,1] [,2] [,3] [,4]
[1,] 1.0000000 0.5748690 0.6330390 0.5950443
[2,] 0.5748690 1.0000000 -0.1879727 0.2915380
[3,] 0.6330390 -0.1879727 1.0000000 0.3048610
[4,] 0.5950443 0.2915380 0.3048610 1.0000000
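The same check is easy to run in any matrix library. A Python/NumPy sketch (an assumption here, since the answer itself uses R) verifying that the fixed matrix really is a valid, positive definite correlation matrix:

```python
import numpy as np

# The corrected correlation matrix from above
M = np.array([[1.0,  0.6,  0.6, 0.6],
              [0.6,  1.0, -0.2, 0.3],
              [0.6, -0.2,  1.0, 0.3],
              [0.6,  0.3,  0.3, 1.0]])

# A usable correlation matrix must be positive definite: all eigenvalues
# positive, and the Cholesky factorization succeeds.
eigvals = np.linalg.eigvalsh(M)
assert eigvals.min() > 0                       # positive definite
L = np.linalg.cholesky(M)                      # would raise LinAlgError otherwise
assert abs(np.linalg.det(M) - 0.0528) < 1e-6   # matches det(M) shown above
```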
|
44,869
|
Why can't I simulate variables with negative correlation? How can I fix it?
|
The Cholesky method works with negative correlations. It does require a positive definite matrix, of course, but the matrix can have negative elements in it; see this example.
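A minimal sketch of the method, here in Python/NumPy rather than R (an assumption for illustration): factor the target correlation matrix as M = LL^T and multiply independent standard normals by L^T, which reproduces the negative correlation:

```python
import numpy as np

# Target correlation matrix (positive definite, with one negative entry)
M = np.array([[1.0,  0.6,  0.6, 0.6],
              [0.6,  1.0, -0.2, 0.3],
              [0.6, -0.2,  1.0, 0.3],
              [0.6,  0.3,  0.3, 1.0]])

rng = np.random.default_rng(1234)
Z = rng.standard_normal((100_000, 4))   # independent standard normals
X = Z @ np.linalg.cholesky(M).T         # impose the target correlations

C = np.corrcoef(X, rowvar=False)        # empirical correlations, close to M
```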
|
44,870
|
Are LOESS and GAM with one covariate the same?
|
Not really a full answer, but too long for a comment: s sets up a spline, whereas loess does a local regression.
In the gam package (maybe mgcv too, not too familiar with that one) you can also feed a local regression, as in
library(gam)
set.seed(1234)
# generate data
x <- sort(runif(100))
y <- sin(2*pi*x) + rnorm(100, sd=0.1)  # noise vector matches the length of x
gam.1 <- gam(y ~ lo(x))
base.r <- loess(y ~ x)
summary(base.r$fitted - gam.1$fitted)
plot(base.r$fitted,gam.1$fitted)
That does not produce the same fitted values either, but maybe you can further play around with the settings of lo and loess.
|
44,871
|
Are LOESS and GAM with one covariate the same?
|
"LOESS" uses local kernel regression but is not a pure local kernel regression.
Local regression for a pre-specified bandwidth or pre-specified set of varying bandwidths can be written as a linear function of the data.
LOESS is however, non-linear, in that it attempts to introduce a degree of "robustification" to outliers (by downweighting large residuals and refitting).
As a result, in the general case, LOESS results won't be exactly reproduced by playing about with the settings for local linear regression, though if the data are sufficiently "nice" there may be a correspondence when the same settings are used. LOESS also uses a "span" parameter that alters the bandwidth to "cover" a given fraction of the data, so even without any issues with large residuals, a local regression method would have to adjust its bandwidth in the same way to reproduce the fit.
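The downweighting step is simple to sketch. Cleveland's robustness weights are bisquare weights of the residuals scaled by six times their median absolute value; here is an illustrative version in Python (the surrounding answers use R, so treat this as a sketch of the idea rather than the exact loess internals):

```python
import numpy as np

def bisquare_weights(residuals):
    """Robustness weights for LOESS's refitting step:
    w_i = (1 - (r_i / (6*s))^2)^2 with s = median |r_i|,
    clipped to 0 for residuals beyond 6*s.
    Assumes at least one nonzero residual (s > 0)."""
    r = np.asarray(residuals, dtype=float)
    s = np.median(np.abs(r))                 # median absolute residual
    u = np.clip(r / (6.0 * s), -1.0, 1.0)
    return (1.0 - u**2) ** 2

# The gross outlier gets weight 0; well-fit points keep weight near 1
w = bisquare_weights([0.1, -0.2, 0.15, 8.0])
```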
|
44,872
|
Are LOESS and GAM with one covariate the same?
|
If your link function is the identity (i.e., the error's PDF is Gaussian), a one-covariate GAM is nothing other than a smooth version of your scatter plot, and this is generally a locally weighted scatterplot smoother. Read Hastie and Tibshirani 1986, particularly their section 5.2: they fit GAMs by Fisher local scoring, where the weighted least squares fit is replaced by the more general (local) smoothing. Although they do not call it a LOESS, they speak about a running-lines smoother with weights (dp/dq)^2 * V^(-1), which is basically a local weighted smoother. If your link function is the identity, the scoring procedure needs no iterations and the linear estimator eta reduces to the smooth of the scatter plot.
Why R does not reflect this behaviour I do not know (I know little about R). I guess you have to specify in the LOESS function that your weights are all 1s (otherwise, I think they depend automatically on the distance between the observation points), and/or surely you have to use the same span in both GAM and LOESS.
|
44,873
|
Random Forests overfitting/unbalanced classes?
|
In highly unbalanced datasets, how do you detect over fitting?
Use metrics that are robust against unbalanced datasets, like Precision, Recall or F1-score. In your example with 99% 1's and 1% 0's, a classifier that always predicts positive will have an Accuracy of 0.99, but Precision, Recall and F1-score of 0.00 for the minority class.
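To make the accuracy-vs-F1 point concrete, here is a small sketch in Python (pure standard library; the toy dataset and metric helpers are invented for the example) scoring the minority class for a classifier that always predicts the majority class:

```python
# Hypothetical imbalanced dataset: 99 positives (1) and 1 negative (0).
y_true = [1] * 99 + [0]
y_pred = [1] * 100  # trivial classifier: always predict the majority class

accuracy = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

# Metrics computed for the minority class (label 0)
tp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
precision = tp / (tp + fp) if tp + fp else 0.0
recall = tp / (tp + fn) if tp + fn else 0.0
f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0

print(accuracy, precision, recall, f1)  # 0.99 0.0 0.0 0.0
```

Accuracy looks excellent while every minority-class metric is zero, which is exactly why accuracy alone cannot reveal this failure mode.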
What can you do to avoid overfitting in unbalanced datasets?
Cluster your positive samples into several clusters of the same size as the negative samples, i.e. move from binary classification to multi-label classification
Sub-sample the positive samples in order to have a 50/50 dataset.
Use another algorithm that deals naturally with unbalanced datasets, like anomaly detection methods.
Do Random Forest overfit?
Yes. Any classifier with high complexity with respect to the training data will overfit. However, the overfitting does not increase as the number of single Decision Trees is increased.
How to avoid overfitting with Random Forest?
Decrease the complexity of the Decision Tree: pre- or post-pruning
Randomly drop features and/or samples per node
|
44,874
|
Random Forests overfitting/unbalanced classes?
|
I personally think Random Forests are not impervious to overfitting. Overfitting is always a possibility, for any model. One possible way to counteract overfitting is by always using cross-validation.
If you want to detect overfitting, you can plot learning curves. Here, you are going to train the model multiple times, each time with a larger training set. Afterwards, you calculate the score on both the training set and the test set and plot these scores. Or, as Scikit Learn puts it: "A learning curve shows the validation and training error of an estimator for varying numbers of training samples." If the training error is low and the validation error is much higher, your model is overfitting.
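As a sketch of the train/validation gap described above (the same diagnostic as a learning curve, but at a single training size): fit an overly flexible model to a few noisy points and compare errors on the training set and a held-out set. This assumes NumPy is available; the data-generating function is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_data(n):
    # noisy samples from an arbitrary smooth function (illustrative only)
    x = rng.uniform(-1, 1, n)
    return x, np.sin(3 * x) + rng.normal(0, 0.3, n)

x_train, y_train = make_data(15)
x_val, y_val = make_data(200)

# A degree-12 polynomial on 15 points is flexible enough to chase the noise
coef = np.polyfit(x_train, y_train, 12)

def mse(x, y):
    return float(np.mean((np.polyval(coef, x) - y) ** 2))

train_err, val_err = mse(x_train, y_train), mse(x_val, y_val)
# val_err much larger than train_err is the overfitting signature described above
```

Repeating this at increasing training sizes and plotting both errors gives the learning curve itself.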
To address your second question; when your data consists of 99% 1's and 1% 0's, this will certainly affect your final result!
|
44,875
|
Degrees of freedom
|
There is a sentence prior to the passage quoted by the OP that I believe helps to interpret this:
In statistics, the number of degrees of freedom (d.o.f.) is the number
of independent pieces of data being used to make a calculation. (...).
The number of degrees of freedom is a measure of how certain we are
that our sample population is representative of the entire population
- the more degrees of freedom, usually the more certain we can be that we have accurately sampled the entire population.
So here
"more degrees of freedom"$\equiv$ "greater number of independent pieces of data"
This starts to sound familiar, since it points to the size of a sample of independent draws from the population. Moreover, the focus here is on experimental data, so all the nice properties are, I guess, assumed to be guaranteed; therefore the larger the sample of independent pieces of data, the more strongly the consistency property of the estimator will emerge and be reflected in the estimates obtained.
So it appears that, for the author of the passage, the logical chain here goes as follows:
"more degrees of freedom"$\equiv$ "greater number of independent
pieces of data"
and
"greater number of independent pieces of data" $\Rightarrow$"greater
accuracy in recovering the population moments from the data"
and
"greater accuracy in recovering the populations moments from the data"
$\Rightarrow$ "the more representative of the population is the sample"
So
"more degrees of freedom"$\Rightarrow$ "the more representative of the population is the sample"
It appears therefore that the author of the passage employs the term "representative sample" with the meaning of "miniature population" or of "typical or ideal case", to follow the typology of Kruskal and Mosteller as re-relayed here.
|
44,876
|
Regression diagnostics
|
Consider one of the simplest possible cases: one independent variable (so 2 parameters, including the constant) and one data point.
Plot your one data point
Draw a straight line through that one point. Draw a different straight line through the same point. Draw a third one. ... and so on.
$\hspace{3cm}$
They all fit the data perfectly. Which one are you going to pick?
The problem is similar with two points and two predictors (with a plane through two points it's a bit like trying to rest a sheet of plywood on top of a picket fence - stable in one direction, but it's a see-saw in the other).
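The same point can be checked numerically: with one observation and two parameters the design matrix is rank-deficient, so least squares has infinitely many exact solutions. A minimal NumPy sketch (the specific numbers are arbitrary):

```python
import numpy as np

# One data point, two parameters (intercept b0 and slope b1): underdetermined.
X = np.array([[1.0, 2.0]])   # design row [1, x] for a point at x = 2
y = np.array([3.0])          # ... with observed response y = 3

beta, _res, rank, _sv = np.linalg.lstsq(X, y, rcond=None)
# rank 1 < 2 parameters: the normal equations are singular, so lstsq can
# only return one of infinitely many perfectly fitting lines
fit_error = float(abs((X @ beta - y)[0]))

# Any (b0, b1) with b0 + 2*b1 = 3 also fits with zero error:
for b1 in (0.0, 1.0, -5.0):
    b0 = 3.0 - 2.0 * b1
    assert abs(b0 + 2.0 * b1 - 3.0) < 1e-12
```

`lstsq` happens to return the minimum-norm solution, but nothing in the data distinguishes it from the other lines through the same point.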
|
44,877
|
Distributions similar to Normal distribution
|
You can also use heavy-tail Lambert W x Gaussian random variables $Y$ with tail parameter $\delta \geq 0$ and $\alpha \geq 0$. Similar to the $t_{\nu}$ distribution, the Normal distribution is nested for $\delta = 0$ (in this case the input $X$ equals the output $Y$). In R you can simulate, estimate, plot, etc. several Lambert W x F distributions with the LambertW package.
In this similar post I fix the input standard deviation $\sigma_X = 1$ and vary $\delta$ from $0$ to $2$. As the variance of $Y$ depends on $\delta$ ($\sigma_Y$ increases with $\delta$ and does not exist for $\delta \geq 0.5$), the densities are not compared at the same variance.
However, you want to compare the actual distributions at the same (finite) variance, so we need to
keep $\delta < 0.5$ as otherwise $var(Y) \rightarrow \infty$ or undefined;
and compute the corresponding input standard deviation $\sigma_X = \sigma_X(\delta)$ so that $\sigma_Y = \sigma_Y(\sigma_X, \delta) = 1$ for any given $\delta$.
The following plot shows densities at varying $\delta$; as $\delta$ increases the density becomes more peaked/concentrated around $0$.
library(LambertW)
library(RColorBrewer)
# several heavy-tail parameters (delta < 0.5 so that variance exists)
delta.v <- seq(0, 0.45, length = 10)
x.grid <- seq(-3, 3, length = 201)
col.v <- colorRampPalette(c("blue", "red"))(length(delta.v))
pdf.vals <- matrix(NA, ncol = length(delta.v), nrow = length(x.grid))
for (ii in seq_along(delta.v)) {
  # compute sigma_x such that sigma_y(delta) = 1
  sigma.x <- delta_01(delta.v[ii])["sigma_x"]
  theta.01 <- list(delta = delta.v[ii], beta = c(0, sigma.x))
  pdf.vals[, ii] <- dLambertW(x.grid, "normal", theta = theta.01)
}
matplot(x.grid, pdf.vals, type = "l", col = col.v, lwd = 2,
ylab = "", xlab = "")
grid()
legend("topleft", paste(delta.v), col = col.v, title = expression(delta),
lwd = 3, lty = seq_along(delta.v))
And, similar to the post on the t distribution, the relative peak height at $0$:
plot(delta.v, pdf.vals[x.grid == 0, ] / dnorm(0), pch = 19, lwd = 10,
col = col.v, ylab = "", xlab = expression(delta), xlim = c(0, 0.5))
mtext("Relative peak height \n (Normal(0, 1) = 1.0)", side = 2, line = 2)
grid()
abline(h = 1, lty = 2, col = "darkgreen")
|
44,878
|
Distributions similar to Normal distribution
|
This touches on the notion of kurtosis (from the ancient Greek for curved, or arching), which was originally used by Karl Pearson to describe the greater or lesser degree of peakedness (more or less sharply curved) seen in some distribution when compared to the normal.
It's often the case that - at a fixed variance - a more sharply curved center is also associated with heavier tails.
However, even with symmetric distributions, the standardized fourth-moment measure of kurtosis doesn't necessarily go with a higher peak, heavier tails, or greater curvature near the mode. [Kendall and Stuart's Advanced Theory of Statistics (I'm thinking of the second to fourth edition, but it will doubtless also be in more recent versions under different authors) shows that all combinations of relative peak height, relative tail height and kurtosis can occur, for example.]
In any case, many examples abound and looking for distributions with excess kurtosis above 0 is an easy way to find examples.
Perhaps the most obvious one (already mentioned) is the $t_{\nu}$-distribution, which has the nice property of including the normal as a limiting case. If we take $\nu>2$ so the variance exists and is finite, then the $t$ has variance $\frac{\nu}{\nu-2}$.
To scale to variance 1, then, the usual $t$-variable must be divided by $\sqrt{\frac{\nu}{\nu-2}}$, which multiplies the height by the same quantity.
The pdf for the "standard" $t$ is:
$$\frac{\Gamma \left(\frac{\nu+1}{2} \right)} {\sqrt{\nu\pi}\,\Gamma \left(\frac{\nu}{2} \right)} \left(1+\frac{x^2}{\nu} \right)^{-\frac{\nu+1}{2}}\,,$$
so its height at 0 is:
$$\frac{\Gamma \left(\frac{\nu+1}{2} \right)} {\sqrt{\nu\pi}\,\Gamma \left(\frac{\nu}{2} \right)}$$
Therefore, the scaled-t with variance 1 has height at 0:
$$\frac{\Gamma \left(\frac{\nu+1}{2} \right)} {\sqrt{\nu\pi}\,\Gamma \left(\frac{\nu}{2} \right)}\sqrt{\frac{\nu}{\nu-2}}=\frac{\Gamma \left(\frac{\nu+1}{2} \right)} {\sqrt{\pi(\nu-2)}\,\Gamma \left(\frac{\nu}{2} \right)}$$
which gives:
$\quad$
The horizontal dashed line is the peak height for the normal. We see that the unit-variance $t$ has peak height above that of the normal for small degrees of freedom. It also turns out (e.g. by considering series expansions) that eventually every standardized-to-unit-variance $t$ with sufficiently large $\nu$ must have peak height above that of the normal.
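The peak-height formula above is easy to check numerically. A short Python sketch (using log-gamma to avoid overflow at large $\nu$; the function name is ad hoc):

```python
from math import lgamma, exp, pi, sqrt

def scaled_t_peak(nu):
    # density at 0 of a t_nu variable rescaled to unit variance (nu > 2):
    # Gamma((nu+1)/2) / (sqrt(pi*(nu-2)) * Gamma(nu/2)), via lgamma
    return exp(lgamma((nu + 1) / 2) - lgamma(nu / 2)) / sqrt(pi * (nu - 2))

normal_peak = 1 / sqrt(2 * pi)  # ~0.3989

# above the normal's peak for every nu checked ...
for nu in (3, 5, 10, 30, 100):
    assert scaled_t_peak(nu) > normal_peak
# ... and approaching it from above as nu grows
assert abs(scaled_t_peak(1000) - normal_peak) < 1e-3
```

For $\nu=3$ the value is exactly $2/\pi \approx 0.6366$, well above the normal's $\approx 0.3989$, matching the left end of the plot.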
There are numerous other distributions which might suit, of which I'll mention a few - the logistic distribution (when standardized has peak height $\frac{\pi}{4\sqrt{3}}$), the hyperbolic secant distribution (peak height $\frac{1}{2}$), the Laplace (or double exponential, with peak height $\frac{1}{\sqrt{2}}$). The last one is not smooth at the peak, however, so if you're after a smooth curve at the peak you might want to choose one of the others.
$\quad$
|
44,879
|
Distributions similar to Normal distribution
|
Chernoff's distribution (https://en.wikipedia.org/wiki/Chernoff%27s_distribution) is a distribution that has the characteristics I believe you are interested in: on the tails, the density is approximately proportional to
$|x| e^{-a|x|^3 + b|x|}$
for constants $a$ and $b$.
Noting that a normal density is proportional to
$e^{-(ax - b)^2}$
you can see that Chernoff's tails decay faster than the normal distribution's.
It's not a very simple distribution, though.
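A quick numerical comparison of the two tail shapes (with the constants set to $a = b = 1$ purely for illustration):

```python
from math import exp

# |x| * exp(-|x|^3 + |x|)  vs  exp(-x^2): the cubic exponent wins quickly
for x in (2.0, 3.0, 4.0):
    chernoff_like = x * exp(-x**3 + x)
    gaussian_like = exp(-x**2)
    assert chernoff_like < gaussian_like  # faster tail decay

# at x = 3 the ratio is already tiny
ratio_at_3 = (3.0 * exp(-27.0 + 3.0)) / exp(-9.0)
```

Already at $x=3$ the cubic-exponent tail is several orders of magnitude below the Gaussian-type tail.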
|
44,880
|
Normalization to non-degenerate distribution
|
Consider the most basic example, the sample mean from an i.i.d. sample of size $n$, $\bar X_n$.
We know that as $n \rightarrow \infty$, $\bar X_n \rightarrow \mu$, where $\mu$ is the common mean, the expected value, of the random variables from which the sample is generated.
So at the limit, $\bar X_n$ has a degenerate distribution, which is the formal way to say that it converges to a constant. Constant terms can be considered as degenerate random variables. We usually say "constants do not have a distribution", but since sometimes issues of existence matter (meaning that the phrase "the distribution does not exist" properly means that the statistic we examine goes to infinity as the sample size goes to infinity), the correct way to distinguish the two cases is to say "the distribution of a constant is degenerate".
And what do we do, in order to obtain a non-degenerate asymptotic distribution? We create a function of the sample mean, that does not converge to a constant, but it doesn't diverge either. In the case of the sample mean, this function is $\sqrt n(\bar X_n -\mu)$.
In an analogous spirit, in Extreme Value Theory the extreme order statistics either diverge (if the distribution has unbounded support) or tend to a constant (if the distribution has bounded support on that side). In both cases we don't get a limiting distribution, so we need to find a function of the extreme order statistic which converges to a non-constant random variable and hence has a usable distribution. The deterministic sequences $\{a_n\}$ and $\{b_n\}$, together with the statistic, create this function. Finding these sequences is not that simple, see for example this post.
Regarding the example given by @Glen_b for the maximum order statistic from a Uniform $U(0,1)$ (a distribution with bounded support): intuitively, as the sample size increases, we will obtain realizations of the random variable arbitrarily close to its upper bound. But this means that $X_{(n)} \rightarrow \max X$, which is a constant, and so it has a degenerate distribution. So we need to find a function of $X_{(n)}$ that does not diverge, and does converge to a random variable. In the specific case, this function is indeed $Z = n(1-X_{(n)})$. To see this, use the change-of-variable formula to find that
$$Z =n(1-X_{(n)}) \Rightarrow X_{(n)} = 1-\frac Zn \Rightarrow \left|\frac {\partial X}{\partial Z} \right|= \frac 1n$$
and note that $Z \in [0,n]$.
Therefore
$$f_Z(z) = \left|\frac {\partial X}{\partial Z} \right| f_{X_{(n)}}(1-z/n) = \frac 1n \left (nf_X(1-z/n)[F_X(1-z/n)]^{n-1}\right)$$
But $f_X(\cdot) =1$, and $F_X(x) =x$. So
$$f_Z(z) =\left(1-\frac zn\right)^{n-1}$$
and
$$F_Z(z) = \int_{0}^z\left(1-\frac tn\right)^{n-1}dt = 1-\left(1-\frac zn\right)^{n}$$
Then
$$\lim_{n\rightarrow \infty}F_Z(z) = 1-\lim_{n\rightarrow \infty}\left(1-\frac zn\right)^{n} = 1-e^{-z}$$
which is the distribution function of a standard exponential (i.e. with mean value $1$).
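This limit is easy to check by simulation. A Python sketch (sampling the maximum directly via the inverse-CDF trick, since $P(X_{(n)} \le x) = x^n$; the sample sizes are arbitrary):

```python
import random
from math import exp

random.seed(0)
n, reps = 1000, 50000

# X_(n) for n i.i.d. U(0,1) can be drawn in one shot as U**(1/n),
# because P(max <= x) = x**n
z = [n * (1 - random.random() ** (1 / n)) for _ in range(reps)]

# empirical CDF of Z = n(1 - X_(n)) vs the Exp(1) CDF 1 - e^{-z}
emp = {q: sum(v <= q for v in z) / reps for q in (0.5, 1.0, 2.0)}
```

With $n = 1000$ the empirical CDF should already sit within Monte Carlo error of $1 - e^{-z}$ at each checkpoint.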
|
44,881
|
Normalization to non-degenerate distribution
|
Normalization is used to mean a variety of things - which usually relate to scaling in some way. In this case it's just a matter of finding constants to subtract and divide by such that the resulting sequence of random variables converges to a distribution that isn't degenerate.
Presumably in the situation under discussion,
\begin{equation}
\max \{X_1, \cdots, X_n\}
\end{equation}
is degenerate (it's typically the case).
Aside from some oddness in that they seem to be using one letter for two different things there, all they're talking about is choosing $a_n$ and $b_n$ so that
\begin{equation}
\frac{\max \{X_1, \cdots, X_n\} - b_n}{a_n}
\end{equation}
isn't degenerate in the limit.
If you can find $E(\max \{X_1, \cdots, X_n\})$ and $\text{Var}(\max \{X_1, \cdots, X_n\})$ as functions of $n$, for example, you might be able to set $b_n$ to the first and $a_n$ to the square root of the second, which would yield something that has constant mean and variance ($0$ and $1$ respectively). If the distribution converges in the limit, it should satisfy the conditions.
For example, consider $X_i$ being U(0,1). Then in the limit, the sample maximum $X_{(n)}$ is degenerate.
But I think $n(1-X_{(n)})$ is not degenerate in the limit - IIRC it goes to a standard exponential.
|
44,882
|
Tool for generating correlated data sets
|
You could do it in any variety of places. Excel, R, ... almost anything capable of doing basic statistical calculations.
Population correlation. This is a simple matter in the bivariate case of taking independent random variables with the same standard deviation and creating a third variable from those two that has the required correlation with one of the two random variables. If $X_1$ and $X_2$ are independent standard normal variables, then $Y=rX_2+\sqrt{1-r^2}X_1$ will have correlation $r$ between $Y$ and $X_2$.
Here's an example in R:
n = 10
r = 0.8
x1 = rnorm(n)
x2 = rnorm(n)
y1 = r*x2+sqrt(1-r*r)*x1
Here the underlying variables have population correlation of the desired size, but the sample correlation will differ from it. (I just ran the code three times and got sample correlations of 0.938, 0.895, and 0.933.)
This could be done in Excel or any number of other packages with similar ease.
If you need it for more than two variables and some prespecified correlation matrix, this can be done using the Cholesky decomposition (of which the above is a special case). If $Z$ is a vector of length $k$ of independent random variables with unit (or at least constant) standard deviation, and $S$ is a correlation matrix with Cholesky decomposition $S=LL'$, then $LZ$ will have population correlation matrix $S$.
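For example, a sketch of the Cholesky approach in R (the target matrix S below is made up for illustration; note that R's chol() returns the upper-triangular factor, so it must be transposed):

```r
# Sketch: generate three variables whose population correlation matrix is S.
set.seed(1)
n <- 1000
S <- matrix(c(1.0, 0.8, 0.5,
              0.8, 1.0, 0.3,
              0.5, 0.3, 1.0), nrow = 3)
L <- t(chol(S))                       # S = L %*% t(L)
Z <- matrix(rnorm(3 * n), nrow = 3)   # independent standard normals
X <- L %*% Z                          # rows have population correlation S
round(cor(t(X)), 2)                   # sample correlations near S
```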
Sample correlation. For the exact sample correlation, you need samples with exactly zero sample correlation, and identical sample variances, before applying the above trick. There are a variety of ways to achieve that, but one simple way is to take residuals from a regression (which will be uncorrelated with the x-variable in the regression), and then scale both variables to have unit variance.
Here's an example in R:
n = 10
r = 0.8
x1 = rnorm(n)
x2 = rnorm(n)
y1 = scale(x2) * r + scale(residuals(lm(x1~x2))) * sqrt(1-r*r)
which produces the correlation:
cor(y1,x2)
[,1]
[1,] 0.8
exactly as desired.
So now it's just a matter of writing out the results in your preferred format (all the formats you mention can be done easily); for example, as a csv file, you'd call write.csv:
write.csv(data.frame(y=y1,x=x2),file="myfile.csv")
which makes a file of the name "myfile.csv" in the current working directory with the contents:
"","y","x"
"1",0.743433299251026,0.617686871809365
"2",0.527604385327034,-0.113047553664104
"3",-0.397333571358269,0.196447643803443
"4",-0.875264248799599,-1.57628371273354
"5",-0.225441433921137,-0.107919886825751
"6",0.0817573026498336,0.370207951209058
"7",-2.15935431462587,-1.21145928947767
"8",1.46638207013879,1.10215217029937
"9",0.311683673588212,-0.470550477344661
"10",0.526532837749974,-0.104382608454622
|
44,883
|
Tool for generating correlated data sets
|
Package mvtnorm in R produces random multivariate normals. You can specify the correlations.
If M is your matrix of random normals, do write.csv(M, file="mydata.csv") to write it out to a file.
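A minimal sketch (assuming the mvtnorm package is installed; the correlation value is made up):

```r
# Sketch: draw correlated normals with mvtnorm and write them to csv.
library(mvtnorm)
S <- matrix(c(1.0, 0.7,
              0.7, 1.0), nrow = 2)           # target correlation matrix
M <- rmvnorm(500, mean = c(0, 0), sigma = S)
cor(M)                                       # sample correlation near 0.7
write.csv(M, file = "mydata.csv")
```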
|
44,884
|
Tool for generating correlated data sets
|
To avoid specifying correlations that are "impossible" as a whole set (the correlation matrix can become non-positive-definite) - for instance, you cannot define two highly correlated variables together with a third that is close to one of them but far from the other - it may be more useful to begin with a "factor loadings" matrix instead, which describes the composition of the random variables as linear (regression) equations. This is less "natural" to look at in the beginning, but one can get used to it.
The following might be done similarly, and perhaps better, in R, but I show it here in my own matrix tool language MatMate because I'm inexperienced in R. It could be done more briefly, without naming variables like N, nv, etc. - you could just insert the values - but for documentation I've used the more verbose form here. The example is:
3 hidden common factors and
6 itemspecific error-factors (normal distribution) make
6 "empirical" variables
measured in N=1000 cases.
//==============================================================
N = 1000
nv = 6 // set number of empirical variables
ncf,nef = 3,nv // set number of common factors, error-factors
nf = ncf+nef // needed uncorrelated random-factors
// create a hidden ("unknown") loadingsmatrix, which describes the
// composition of our empirical data by the "unknown" factors
// remember we want ncf=3 common factors and nef=nv=6 error factors
ulad = {{ 10.0 , 1, 0}, _
{ 9 , 0, 1}, _
{ 0 , 11, 0}, _
{ 1 , 12, 1}, _
{ 0.2 , -1, 11}, _
{ -0.3 , 1, 10}}
ulad = ulad || 2*einh(nef) // append an identity matrix as definition of the
// error-variance
// make the itemspecific variance a bit bigger
// than the spurious cross-factors loadings in
// the ulad-loadingsmatrix
chk = ulad * ulad' // check the expected covariancematrix
list chk // print it out
chk = covtocorr (chk) // look at it as correlation-matrix
list chk // print it out
// Now generate random data for nf uncorrelated normally-distributed factors
set randomstart=41 // set randomgenerator to get reproducable random data
rn = randomn(nf,N) // fix a basic datamatrix of random numbers (normal dist)
chk = (rn *' - N*einh(nf))*1e3 // we find spurious correlations of 1e-3
ufac=unkorrzl(rn) // refine data in rn: remove spurious correlations
// the process leaves still spurious correlations of 1e-12
chk = (ufac *' - N*einh(nf))*1e12 // still spurious correlations of 1e-12
// repeat to higher-precision
ufac=zvaluezl(abwzl(ufac)) // correct again for exacter z-values
ufac=unkorrzl(ufac) // remove again spurious correlation
chk = (ufac *' - N*einh(nf))*1e18 // spurious correlations of 1e-18
// create "empirical" dataset with N=1000 measures
// having the wished compositions of the random factors
data = ulad * ufac
// ========= end of the empirically unobservable mechanism ============
// now you can proceed with regression, factoranalysis or whatever on
// that data
// .....................
// or you can write out the data in a csv-file or into the clipboard
matwrite csv("mydata.csv",10,6) = data // write in csv-format, cases along row
// max 10 digits, 6 of them decimals
matwrite csv("mydata.csv",10,6) = data' // cases along column
matwrite csv("clip",10,6) = data' // write it directly into clipboard
// to insert it, for instance, in Excel
|
44,885
|
Does testing for assumptions affect type I error?
|
Generally speaking, the answer is yes, both type I and type II error rates are impacted by choosing tests on the basis of tests of assumptions.
This is pretty well established with testing of equality of variance (for which several papers point it out), and testing normality. It should be expected that it will be the case in general.
The advice is usually along the lines of "if you can't make the assumption without testing, better to simply act as if the assumption doesn't hold".
So, for example, if you're trying to decide between the equal-variance and Welch-type t-tests, by default use the Welch test (though with equal sample sizes the equal-variance t-test is fairly robust to violations of that assumption).
Similarly, in moderately-small$^*$ samples, you may be better off using a permutation test for location by default than testing for normality and then using a t-test if you fail to reject (in large samples, the t-test is usually level-robust enough that it's not likely to be that big an issue in most cases, if the sample is also large enough that you're not concerned about impact on power). Alternatively, the Wilcoxon-Mann-Whitney has very good power compared to the t-test at the normal, and would often be a very viable alternative.
[If for some reason you need to test it would be best to be aware of the extent to which the significance level and power of the tests may be affected under either arm of any resulting choice the test of assumptions leads you to. This will depend on the particular circumstances; for example simulation can be used to help investigate the behavior in similar situations.]
* (but not very small, since the discreteness of the test statistic will limit the available significance levels too much; specifically, at very small sample sizes the smallest possible significance level may be impractically large)
A reference (with a link to more) on testing heteroskedasticity when choosing between equal-variance-t vs Welch-t location tests is here.
I also have one for the case of testing normality before choosing between the t test and the Wilcoxon-Mann-Whitney test (ref [3] here).
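The effect is easy to demonstrate by simulation. Here's a sketch for the variance-pretest case (the sample sizes, standard deviations, and choice of pre-test are illustrative assumptions, not taken from the references above):

```r
# Sketch: two-stage procedure (F-test for equal variances, then pooled or
# Welch t-test) versus always-Welch, under H0 with unequal n and unequal sd.
set.seed(42)
reps <- 5000
n1 <- 10; n2 <- 40
p_two_stage <- p_welch <- numeric(reps)
for (i in 1:reps) {
  x <- rnorm(n1, sd = 1.5)   # larger sd in the smaller group
  y <- rnorm(n2, sd = 1.0)
  eqvar <- var.test(x, y)$p.value > 0.05           # pre-test of variances
  p_two_stage[i] <- t.test(x, y, var.equal = eqvar)$p.value
  p_welch[i]     <- t.test(x, y)$p.value
}
mean(p_two_stage < 0.05)  # rejection rate of the two-stage procedure
mean(p_welch < 0.05)      # always-Welch stays close to the nominal 0.05
```

In setups like this (larger variance in the smaller group) the two-stage procedure's rejection rate under the null typically drifts away from the nominal level, while always-Welch stays close to it.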
|
44,886
|
Does testing for assumptions affect type I error?
|
Just a thought on this topic.
It's certainly true that when testing assumptions with many different tests, you are going to end up with an overall type I error rate higher than $\alpha$ (where $\alpha$ is the significance level of each individual test), just by standard multiple-testing issues.
But when testing assumptions, a type I error is often not as bad as a type II error! For example, if your data really is normal but you falsely reject normality, you may proceed to use a more robust test, which is (likely) still valid for normal data. On the other hand, if the data is not normal but you fail to reject, you falsely assume normality, leading to the known problems of invalid assumptions.
When doing multiple comparison procedures (i.e. Bonferroni, etc.), we are trying to preserve the type I error rate, at the cost of higher type II error rates. In regards to testing assumptions, that seems like a silly notion, given the higher costs of type II errors.
You can, of course, take this one step further and ask why we are using significance levels for tests of assumptions to start with, when it's really power we should be concerned about. If you did, I would agree with you, but I'm not ready to open that whole can of worms at the moment.
|
44,887
|
What can't be expressed as a linear model?
|
The parameters need to enter the equation linearly. So something like $E(Y)=\beta_1 \cos(\beta_2 x_i + \beta_3)$ would not qualify, because $\beta_2$ and $\beta_3$ enter nonlinearly. But you can take functions of the independent variables as follows:
$E(Y)=\beta_0 + \beta_1X_i + \beta_2X_i^2 + \beta_3 e^{X_i}$
for example.
So the limits of linear regressions are: the mean of the $Y$ values is of the form parameter times (independent variable stuff) + parameter times (more independent variable stuff) ... and so on.
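For instance, a model like the one above is still fit by ordinary least squares in R (a sketch with made-up coefficients):

```r
# Sketch: nonlinear in x, but linear in the parameters, so lm() works.
set.seed(1)
x <- runif(200, 0, 2)
y <- 1 + 2*x - 0.5*x^2 + 0.3*exp(x) + rnorm(200, sd = 0.1)
fit <- lm(y ~ x + I(x^2) + exp(x))
coef(fit)   # roughly recovers 1, 2, -0.5 and 0.3
```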
|
44,888
|
What can't be expressed as a linear model?
|
(Almost) Everything can be expressed as a linear model, if you don't restrict it to a finite number of parameters.
This is the basis of functional analysis and kernel regression (as in SVMs with kernels). For instance, Fourier series - you can produce an infinite sine/cosine series, where the amplitude of the wave of each frequency gets a learned coefficient, and you can learn (almost) any function (any function whose square is integrable - which is a very weak condition).
Kernel machines, and functional analysis, are a wonderful idea, and make the world seem very beautiful - virtually everything is linear!
See http://en.wikipedia.org/wiki/Kernel_methods
The classic statistical probabilistic reference is Grace Wahba's Spline Models for Observational Data.
|
44,889
|
What happens to adjusted R squared as sample size increases?
|
Adjusted r-squared is intended to be an unbiased estimate of population variance explained using the population regression equation. There are several different formulas for adjusted r-squared and there are various definitions of population variance explained (e.g., fixed versus random-x assumptions). Most commonly, statistical software will report the Ezekiel formula which makes the fixed-x assumption.
In general, as sample size increases,
the difference between expected adjusted r-squared and expected r-squared approaches zero; in theory this is because expected r-squared becomes less biased.
the standard error of adjusted r-squared would get smaller approaching zero in the limit.
So the main take-home message is that if you are interested in population variance explained, then adjusted r-squared is always a better option than r-squared. That said, as your sample size gets very large, r-squared won't be that biased (note that for models with large numbers of predictors, sample size needs to be even bigger for r-squared to approach being unbiased).
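For reference, the Ezekiel (fixed-x) formula mentioned above is
$$R^2_{\text{adj}} = 1 - (1 - R^2)\,\frac{n-1}{n-p-1}$$
where $n$ is the sample size and $p$ the number of predictors; with $p$ fixed, $(n-1)/(n-p-1) \rightarrow 1$ as $n \rightarrow \infty$, which is why the gap between adjusted $R^2$ and $R^2$ vanishes in large samples.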
|
44,890
|
What happens to adjusted R squared as sample size increases?
|
Here's a simple function in R that simulates two Gaussian variables and copies them to inflate sample size without changing their correlation. It plots adjusted $R^2$ over increasing copies with a line at $R^2$.
Adj.R.Squared = function(Sample.Size = 10, Max.Copies = 30, Noise = 1) {
  # simulate two correlated Gaussian variables
  y = rnorm(Sample.Size)
  x = y + Noise * rnorm(Sample.Size)
  # refit the model on 1, 2, ..., Max.Copies stacked copies of the data
  Copies = 1:Max.Copies
  Adj.R2 = sapply(Copies, function(i)
    summary(lm(rep(y, i) ~ rep(x, i)))$adj.r.squared)
  plot(Copies, Adj.R2)
  abline(h = summary(lm(y ~ x))$r.squared, col = 'red')
  lines(Copies, Adj.R2)
  legend('bottomright', c('Adj. R²', 'R²'), lty = 1,
         col = c('black', 'red'), pch = c(1, NA))
}
It generates new data every time you run it, but always takes on more or less the same shape, asymptotically approaching $R^2$ like @JeromyAnglim described. For example, with set.seed(1):
Decreasing Noise (which increases $R^2$) will shrink the scale of the $y$ axis but not alter the shape: i.e., the difference between adjusted $R^2$ and $R^2$ decreases as $R^2$ increases, but doesn't change the effect of sample size. Increasing Sample.Size makes adjusted $R^2$ approach $R^2$ a little more slowly, but mostly shrinks the scale of the $y$ axis. You can increase Max.Copies to extend the $x$ axis, or modify this function to work with manually entered data or multiple predictors. I've done this myself but not included the code because it doesn't seem to change the basic conclusion. This is intended as a maximally simple answer to a very simple question. Therefore its generalizability may be limited.
|
44,891
|
How is the formula for the Standard error of the slope in linear regression derived? [duplicate]
|
There are a couple of rules to start with:
If $X$ is a random vector from $N(\mu,\Sigma)$ and $A$ is a constant matrix, then $AX \sim N(A\mu, A\Sigma A^T)$.
And in a regression we assume $Y = X\beta + \epsilon$ where $\epsilon \sim N(0,\sigma^2 I)$.
We estimate $\hat\beta = (X^T X)^{-1}X^T Y$
So: $\hat\beta = (X^T X)^{-1}X^T (X\beta + \epsilon)= (X^T X)^{-1}(X^T X)\beta + (X^T X)^{-1}X^T \epsilon$
So $\hat\beta \sim N\left(\beta,\ (X^T X)^{-1}X^T (\sigma^2 I) X (X^T X)^{-1}\right)$.
So the variance of $\hat\beta$ simplifies to $\sigma^2 (X^T X)^{-1}$.
When you look at what is in $(X^T X)^{-1}$, the slope entry becomes $\frac{\sigma^2}{SSX}$, where $SSX = \sum_i (x_i - \bar x)^2$.
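The last step, that the slope entry of $(X^T X)^{-1}$ works out to $1/SSX$, is easy to verify numerically. A small illustrative Python/numpy check (my own sketch; the data points are made up):

```python
import numpy as np

# Design matrix with an intercept column and one predictor.
t = np.array([1.0, 2.0, 4.0, 7.0, 11.0])
X = np.column_stack([np.ones_like(t), t])

# Slope entry (bottom-right) of (X^T X)^{-1} ...
slope_entry = np.linalg.inv(X.T @ X)[1, 1]

# ... equals 1 / SSX, where SSX = sum((t - mean(t))^2).
ssx = np.sum((t - t.mean()) ** 2)
print(slope_entry, 1 / ssx)  # both are 1/66 here
```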
|
44,892
|
How is the formula for the Standard error of the slope in linear regression derived? [duplicate]
|
To elaborate on Greg Snow's answer: suppose your data is in the form of $t$ versus $y$ i.e. you have a vector of $t$'s $(t_1,t_2,...,t_n)^{\top}$ as inputs, and corresponding scalar observations $(y_1,...,y_n)^{\top}$.
We can model the linear regression as $Y_i \sim N(\mu_i, \sigma^2)$ independently over $i$, where $\mu_i = a + b t_i$ is the line of best fit (intercept $a$, slope $b$). Greg's way is to use vector notation.
We can rewrite the above in Greg's notation: let
$Y = (Y_1,...,Y_n)^{\top}$, $X = \left( \begin{array}{cc} 1 & t_1\\ 1 & t_2\\ 1 & t_3\\ \vdots & \vdots \\ 1 & t_n \end{array} \right)$,
$\beta = (a, b)^{\top}$. Then the linear regression model becomes:
$Y \sim N_n(X\beta, \sigma^2 I)$.
The goal then is to find the variance matrix of the estimator $\widehat{\beta}$ of $\beta$.
The estimator $\widehat{\beta}$ can be found by Maximum Likelihood estimation (i.e. minimise $||Y - X\beta||^2$ with respect to the vector $\beta$), and Greg quite rightly states that
$\widehat{\beta} = (X^{\top}X)^{-1}X^{\top}Y$.
See that the estimator $\widehat{b}$ of the slope $b$ is just the 2nd component of $\widehat{\beta}$ --- i.e. $\widehat{b} = \widehat{\beta}_2$.
Note that $\widehat{\beta}$ is now expressed as some constant matrix multiplied by the random $Y$, and he uses a multivariate normal distribution result (see his 2nd sentence) to give you the distribution of $\widehat{\beta}$ as
$N_2(\beta, \sigma^2 (X^{\top}X)^{-1})$.
The corollary of this is that the variance matrix of $\widehat{\beta}$ is $\sigma^2 (X^{\top}X)^{-1}$ and a further corollary is that the variance of $\widehat{b}$ (i.e. the estimator of the slope) is $\left[\sigma^2 (X^{\top}X)^{-1}\right]_{22}$ i.e. the bottom right hand element of the variance matrix (recall that $\beta := (a, b)^{\top}$). I leave it as exercise to evaluate this answer.
Note that this answer $\left[\sigma^2 (X^{\top}X)^{-1}\right]_{22}$ depends on the unknown true variance $\sigma^2$ and is therefore, from a statistics point of view, useless. However, we can attempt to estimate this variance by substituting $\sigma^2$ with its estimate $\widehat{\sigma}^2$ (obtained via the Maximum Likelihood estimation earlier), i.e. the final answer to your question is $\text{var} (\widehat{\beta}) \approx \left[\widehat{\sigma}^2 (X^{\top}X)^{-1}\right]_{22}$. As an exercise, I leave you to perform the minimisation to derive $\widehat{\sigma}^2 = ||Y - X\widehat{\beta}||^2/n$.
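As a sanity check on the whole derivation, one can simulate many datasets from the model with known $\sigma^2$ and compare the empirical variance of the slope estimate against $\left[\sigma^2 (X^{\top}X)^{-1}\right]_{22}$. A rough Python/numpy sketch (my own; all parameter values are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 9.0, 10)               # fixed design points
X = np.column_stack([np.ones_like(t), t])   # columns: intercept, slope
a_true, b_true, sigma = 1.0, 2.0, 0.5

# Theoretical variance of the slope estimator: [sigma^2 (X^T X)^{-1}]_{22}.
theory = sigma**2 * np.linalg.inv(X.T @ X)[1, 1]

# Monte Carlo: refit the least-squares line on many simulated samples.
slopes = []
for _ in range(5000):
    y = a_true + b_true * t + sigma * rng.standard_normal(t.size)
    beta_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    slopes.append(beta_hat[1])

print(theory, np.var(slopes))  # the two variances agree closely
```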
|
44,893
|
Are pairwise Wilcoxon tests a valid non-parametric alternative to Tukey's HSD test?
|
No, it is not a valid nonparametric alternative.
The rank sum tests (either the original Wilcoxon flavor, or the New Improved Mann-Whitney $U$ varieties):
ignore the rankings used by the Kruskal-Wallis test, and
do not employ pooled variance for the pairwise tests.
See, for example, Kruskal-Wallis Test and Mann-Whitney U Test. (Also the pairwise.wilcox.test seems not to have the ties adjustments that these tests do.)
The nonparametric pairwise multiple comparisons tests you are likely looking for are Dunn's test, the Conover-Iman test, or the Dwass-Steel-Critchlow-Fligner test. I have made freely available packages that perform Dunn's test (with options for controlling the FWER or FDR) for Stata and for R, and have implemented the Conover-Iman test for Stata and for R.
References
Conover, W. J. and Iman, R. L. (1979). On multiple-comparisons procedures. Technical Report LA-7677-MS, Los Alamos Scientific Laboratory.
Critchlow, D. E. and Fligner, M. A. (1991). On distribution-free multiple comparisons in the one-way analysis of variance. Communications in Statistics—Theory and Methods, 20(1):127.
Dunn, O. J. (1964). Multiple comparisons using rank sums. Technometrics, 6(3):241–252.
|
44,894
|
Do discriminative models overfit more than generative models?
|
This is a fun question as it provides good context for why the often used heuristic that more parameters $\implies$ more risk of overfitting is just that, a heuristic. To ground the discussion let's consider what is in some sense the simplest problem, binary classification. As a specific example we will take the canonical generative-discriminative pair, naive Bayes and logistic regression respectively. It is important we look at corresponding pairs in this way. Otherwise anything we have to say is going to be useless. We could certainly come up with extraordinarily flexible generative models (imagine replacing the factored conditional $p(x|y)$ in naive Bayes with something like delta functions) which will essentially always overfit.
First we should define what we mean by overfitting. One useful definition of overfitting involves deriving tight probabilistic bounds on generalization error based on training set error and the type of classifiers we're using. In this case a relevant parameter is the VC dimension of the hypothesis class $H$ (the set of classifiers being used). Put simply the VC dimension of a hypothesis class is given by the largest set of examples such that for any possible labeling of that set, there exists an $h \in H$ which can label them in exactly that way. So given $m$ examples in a binary classification setting there are $2^m$ ways to label them. If there exists some set of $m$ examples, and for each of the $2^m$ possible ways of labeling there is some $h \in H$ that labels them correctly, we can conclude $\text{VCdim}(H) \ge m$. Furthermore we say the set of $m$ points is shattered by $H$.
It turns out that naive Bayes and logistic regression both have the same VC dimension since they both classify examples in $n$ dimensions using an $n$ dimensional hyperplane. The VC dimension of an $n$ dimensional hyperplane is $n+1$. You can show this by upper-bounding the VC dimension using Radon's Theorem and then giving an example of a set of $n+1$ points which is shattered. Furthermore in this case our intuition from low dimensional examples generalizes to higher dimensions and we can make pretty pictures.
In this sense, both are equally likely to overfit because we'll generally get similar bounds on generalization. This also explains why 1-nearest-neighbor is the king of overfitting: it has infinite VC dimension because it shatters every set of examples. This isn't the end of the story though.
Another useful way to formally define overfitting is based on obtaining generalization bounds in terms of the best predictor in the hypothesis class, i.e., the hypothesis we would all agree is best if we had an infinite number of examples. Here, things look different for the naive Bayes/logistic regression comparison. The first thing to note is that because of the way naive Bayes is parameterized (they need to specify valid conditionals, sum to 1, etc), we are not guaranteed to converge on the optimal linear classifier, even given an infinite number of examples. On the other hand, logistic regression will. So we can conclude that in fact the set of all naive Bayes classifiers is a proper subset of all logistic regression classifiers. This provides some evidence that indeed, naive Bayes classifiers may be less prone to overfit in this sense simply because they are less powerful/more constrained. Indeed this is the case. In short if we are performing classification in $n$ dimensional space, naive Bayes requires on the order of $O(\log n)$ samples to converge whp to the best naive Bayes classifier. Logistic regression requires on the order of $O(n)$. For a reference for this result see On Discriminative vs. Generative classifiers by Ng and Jordan.
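For $n=2$ the shattering claim is small enough to check by computer: each of the $2^3$ labelings of three points in general position is realized by some hyperplane, so lines in the plane have VC dimension at least $3$. An illustrative Python sketch with a plain perceptron (my own toy check, not from any cited source):

```python
import itertools
import numpy as np

# Three non-collinear points in R^2, augmented with a bias coordinate
# so that a linear separator w corresponds to an affine hyperplane.
pts = np.array([[0.0, 0.0, 1.0],
                [1.0, 0.0, 1.0],
                [0.0, 1.0, 1.0]])

def perceptron_separates(X, y, epochs=1000):
    """Return True if the perceptron finds w with sign(X @ w) == y."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):
            if yi * (xi @ w) <= 0:   # misclassified (or on the boundary)
                w += yi * xi
                updated = True
        if not updated:              # a full clean pass: converged
            return True
    return False

# Every one of the 2^3 = 8 labelings is achieved, i.e. the set is shattered.
all_shattered = all(
    perceptron_separates(pts, np.array(labels))
    for labels in itertools.product([-1.0, 1.0], repeat=3)
)
print(all_shattered)  # True
```

Since each labeling is linearly separable, the perceptron convergence theorem guarantees a clean pass within the epoch budget.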
|
44,895
|
Do discriminative models overfit more than generative models?
|
A generative model typically overfits less because it allows the user to put in more side information in the form of class conditionals.
Consider a generative model $p(c|x) \propto p(c)p(x|c)$. If the class conditionals are multivariate normals with shared covariance, this will have a linear decision boundary. Thus, the model by itself is just as powerful as a linear SVM or logistic regression.
However, a discriminative classifier is much more free in the choice of decision function: it just has to find an appropriate hyperplane. The generative classifier, by contrast, will need far fewer samples to find good parameters if the assumptions are valid.
Sorry, this is rather handwavy and there is no hard math behind it. But it is an intuition.
|
44,896
|
Sparse parameters when computing AIC, BIC, etc
|
Degrees of freedom do not depend on the outcome alone but on the fitting procedure. If it's maximum likelihood, all parameters count.
There is an interesting case where zero weights do not count, and that's lasso:
H. Zou, T. Hastie and R. Tibshirani (2007). On the “degrees of freedom” of the lasso. The Annals of Statistics.
|
44,897
|
Sparse parameters when computing AIC, BIC, etc
|
This is a really difficult question to answer without precise knowledge of the fitting algorithm, nor is it clear cut that there is a reasonable definition of the "number of parameters" that will justify AIC, BIC or other "information criteria" in general.
If estimation is done by $\ell_1$-penalized maximum-likelihood estimation, then I can partially reiterate the answer by user27493. In this case the estimated number of non-zero parameters is a sensible substitute for the total number of parameters in AIC. Note, however, that the Zou et al. paper is on least squares regression with an $\ell_1$-penalty $-$ not logistic regression. See, for instance, Differential geometric least angle regression: a differential geometric approach to sparse generalized linear models by L. Augugliaro et al. for results related to generalized linear models.
BIC is different, and I don't know results in this direction.
The paper with the catchy title Effective Degrees of Freedom: A Flawed Metaphor, recently posted on arXiv by Lucas Janson, William Fithian, and Trevor Hastie, shows that, depending on the data generating mechanism, the effective degrees of freedom ("number of parameters") may exceed the total number of parameters, and may even be unbounded.
In this paper (shameless self promotion of my research) Degrees of freedom for nonlinear least squares estimation with my coauthor Alexander Sokol, we show that for nonlinear least squares estimation the effective degrees of freedom generally contains a hard-to-estimate term that depends on the data generating model. This is also what pops up in some of the examples in the Janson et al. paper mentioned above. In an asymptotic scenario, if the model is close to being true and/or if the model does not "curve too much", and if you use $\ell_1$-penalized least squares estimation, a useful surrogate estimate of the effective degrees of freedom is still the estimated number of non-zero parameters. However, once you move outside of some of the standard and most well behaved models, anything could happen.
|
44,898
|
Is the null model for binary logistic regression just the natural log function?
|
The full model is
$$\ln \frac {\pi}{1-\pi}=\beta_0 +\beta_1 x_1 +\beta_2 x_2+\ldots$$
where $x_i$ is the $i$th predictor, $\beta_i$ its coefficient, & $$\pi=\Pr(Y=1)$$
where $Y$ is the response (coded 1 for "success" & 0 for "failure")
The null model, as @Michael says, contains just the intercept:
$$\ln \frac {\pi}{1-\pi}=\beta_0$$
So the intercept is the log-odds of "success", estimated without reference to any predictors.
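Concretely, the maximum-likelihood fit of the intercept-only model makes the predicted probability equal the sample proportion of successes, so $\beta_0$ is just the sample log-odds. A tiny illustrative Python sketch (my own; the outcome data are invented):

```python
import math

# Binary outcomes: 7 "successes" out of 10 trials (invented data).
y = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0]

p_hat = sum(y) / len(y)                  # sample proportion of successes
beta0 = math.log(p_hat / (1 - p_hat))    # intercept of the null model

# Inverting the logit recovers the sample proportion exactly.
pi = 1 / (1 + math.exp(-beta0))
print(beta0, pi)  # log-odds of 0.7, and 0.7 back again
```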
|
44,899
|
Is the absolute value of the difference between two Poisson distributions a Poisson distribution?
|
Two quite different questions!
Is the absolute value of the difference between two Poisson distributions a Poisson distribution?
This one is easily answered: clearly no, since for a Poisson distribution the mean equals the variance, and that relationship doesn't hold for the absolute difference.
what is the distribution of the absolute value of the Skellam distribution.
This second one is a little trickier. I'm working on a nicer way to do it than the brute-force-and-stupidity direct methods. But since I don't seem to be clever today, here's one of the brute-force-and-stupidity direct methods:
Let $Z = \max(X,Y) - \min(X,Y) = |X - Y|$.
\begin{eqnarray}
P(Z=0) &=& \sum_{i=0}^\infty P(X=i)P(Y=i) \\
&=& \sum_{i=0}^\infty \exp(-\mu_1)\mu_1^i/i! \cdot \exp(-\mu_2)\mu_2^i/i!\\
&=& \sum_{i=0}^\infty \exp(-(\mu_1+\mu_2))(\mu_1\mu_2)^i/(i!)^2\\
&=& \exp(-(\mu_1+\mu_2)) \sum_{i=0}^\infty g^{2i}/(i!)^2\\
&=& \exp(-(\mu_1+\mu_2)) \sum_{i=0}^\infty (2g/2)^{2i}/(i!)^2
\end{eqnarray}
where $g =\sqrt{\mu_1\mu_2}$
Now
$I_\alpha(x) =\sum_{m=0}^\infty \frac{1}{m! \Gamma(m+\alpha+1)}\left(\frac{x}{2}\right)^{2m+\alpha}$, where $I_\alpha(x)$ is a modified Bessel function of
the first kind.
so
$$P(Z=0) = \exp(-(\mu_1+\mu_2)) I_0(2g)$$.
Now, for $j = 1,2,...$,
\begin{eqnarray}
P(Z=j) &=& \sum_{i=0}^\infty [P(X=i)P(Y=i+j) + P(X=i+j)P(Y=i)] \\
&=& \sum_{i=0}^\infty [\exp(-\mu_1)\mu_1^i/i!\exp(-\mu_2)\mu_2^{i+j}/(i+j)!\\
& & \quad\quad + \exp(-\mu_2)\mu_2^i/i! \exp(-\mu_1)\mu_1^{i+j}/(i+j)!]\\
&=& \exp(-(\mu_1+\mu_2))\sum_{i=0}^\infty g^{2i}(\mu_2^j + \mu_1^j)/[i!\,(i+j)!]\\
&=& \exp(-(\mu_1+\mu_2))(\mu_2^j + \mu_1^j)\sum_{i=0}^\infty g^{2i}/[i!\,(i+j)!]\\
&=& \exp(-(\mu_1+\mu_2))(\mu_2^j + \mu_1^j)/g^j\sum_{i=0}^\infty g^{2i+j}/[i!\,(i+j)!]\\
&=& \exp(-(\mu_1+\mu_2))(\mu_2^j + \mu_1^j)/g^j\sum_{i=0}^\infty (2g/2)^{2i+j}/[i!\,(i+j)!]\\
&=& \exp(-(\mu_1+\mu_2))(\mu_2^j + \mu_1^j)/g^j I_j(2g)
\end{eqnarray}
...assuming I didn't make errors - which I easily could have.
Another direct alternative would be to try to work with the Skellam distribution itself, but I don't think it's going to be any nicer.
Now, how do we check I didn't make a mistake?
I see a couple of approaches:
One check is to see whether, for some example parameter values, the probabilities sum to 1.
Another way is to compute a few values by direct summation, truncated when the terms become very small (the probabilities decrease quite rapidly) and compare with the above results.
Yet another is simulation.
(i) First, a function to compute the pmf:
dabskel <- function(x, mu1, mu2) {
  # pmf of |X - Y| for X ~ Poisson(mu1), Y ~ Poisson(mu2)
  g   <- sqrt(mu1 * mu2)
  emm <- exp(-(mu1 + mu2))
  # x = 0: emm * I_0(2g);  x >= 1: emm * (mu2^x + mu1^x)/g^x * I_x(2g)
  emm * besselI(2 * g, x) * ifelse(x == 0, 1, (mu2^x + mu1^x) / g^x)
}
> sum(dabskel(0:20,1,1)) # 20 should be plenty far enough to make it sum to 1
[1] 1
> sum(dabskel(0:20,3,2))
[1] 1
So it seems to sum to 1. Good start.
(ii) Now compute directly:
x <- 0:8
y <- x
f <- function(x, y) dpois(x, 3) * dpois(y, 2)   # joint pmf of (X, Y)
probs <- outer(x, y, f)
f2 <- function(x, y) pmax(x, y) - pmin(x, y)    # |x - y|
vals <- outer(x, y, f2)
asmres <- tapply(c(probs), c(vals), FUN = sum)  # sum joint probs by value of |x - y|
(iii) Now simulate
xyar <- abs(rpois(100000, 3) - rpois(100000, 2))
And compare them:
plot(x,dabskel(x,3,2),type="h")
points(x+.04,asmres,col=2,type="h")
points(x+.08,table(xyar)[1:9]/100000,col=4,type="h")
Black is the function worked out above, red is the direct-but-truncated calculation and the blue is the simulated distribution. Looks like it's okay.
|
44,900
|
Is the absolute value of the difference between two Poisson distributions a Poisson distribution?
|
OP: what is the distribution of the absolute value of the Skellam distribution
Let $X \sim \text{Skellam}(a,b)$, with pmf $f(x)$ for integer $x$:
$$f(x) = e^{-a-b} \left(\frac{a}{b}\right)^{x/2} I_x\left(2 \sqrt{a b}\right)$$
Then, the pmf of $Y=|X|$ will be, say $g(y)$:
$$g(y) = \begin{cases}f(0) & y = 0 \\ f(y) + f(-y) & y \ge 1 \end{cases}$$
All done.
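As a numeric sanity check, here's a small pure-Python sketch of this folding (the Bessel function is evaluated from its series; the rates 3 and 2 and the truncation points are arbitrary choices): the folded pmf $g$ sums to 1.

```python
import math

def bessel_i(n, x, terms=60):
    # I_n(x) for integer n >= 0, from the series sum_m (x/2)^(2m+n) / (m! (m+n)!)
    return sum((x / 2) ** (2 * m + n) / (math.factorial(m) * math.factorial(m + n))
               for m in range(terms))

def skellam_pmf(x, a, b):
    # f(x) = e^(-a-b) (a/b)^(x/2) I_x(2 sqrt(ab));  I_(-n) = I_n for integer n
    return math.exp(-a - b) * (a / b) ** (x / 2) * bessel_i(abs(x), 2 * math.sqrt(a * b))

def abs_skellam_pmf(y, a, b):
    # The folding: g(0) = f(0);  g(y) = f(y) + f(-y) for y >= 1
    return skellam_pmf(0, a, b) if y == 0 else skellam_pmf(y, a, b) + skellam_pmf(-y, a, b)

# The folded pmf should sum to 1 (the tail beyond y = 29 is negligible here)
total = sum(abs_skellam_pmf(y, 3, 2) for y in range(30))
print(total)  # ≈ 1.0
```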
|