Q: Can the mutual information value be greater than 1?

A: Yes, it does have an upper bound, but that bound is not 1.

The mutual information (in bits) is 1 when two parties (statistically) share one bit of information. However, they can share arbitrarily much information: in particular, if they share 2 bits, it is 2.

The mutual information is bounded from above by the Shannon entropy of the probability distribution of either party alone, i.e. $I(X,Y) \leq \min \left[ H(X), H(Y) \right]$.
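As a concrete numerical check of the claims above (a minimal sketch assuming NumPy), computing $I(X,Y)$ from a joint probability table gives 2 bits when two parties share 2 bits, while still respecting the entropy bound:

```python
import numpy as np

def mutual_information_bits(joint):
    """I(X,Y) in bits from a joint probability table p(x, y)."""
    px = joint.sum(axis=1)
    py = joint.sum(axis=0)
    nz = joint > 0  # skip impossible outcomes to avoid log(0)
    return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(px, py)[nz])))

# Two parties sharing 2 bits: X = Y, uniform over 4 symbols.
joint = np.eye(4) / 4
i_xy = mutual_information_bits(joint)                # 2.0 bits
px = joint.sum(axis=1)
h_x = float(-np.sum(px * np.log2(px)))               # H(X) = 2.0 bits
print(i_xy, h_x)
```

Here `i_xy` equals 2 and also attains the bound, since $H(X) = H(Y) = 2$ bits for this joint distribution.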
Q: Can the mutual information value be greater than 1?

A: It depends on whether the alphabet of interest is finite with a known cardinality $K$, finite with an unknown cardinality $K$, or countably infinite. If you are talking about mutual information (there is some confusion in names: mutual information, information gain, information gain ratio, etc.), then the answer is YES if $K$ is known, and NO if $K$ is unknown or infinite; mutual information is unbounded on a countable alphabet!

The answer provided above is incorrect because $I(X,Y) \leq \min(H(X), H(Y))$ is an incorrect statement. This is easily seen since $H(X)$ and $H(Y)$ may be arbitrarily large. That said, it must be mentioned that the author of the answer above may be assuming that $K$ is a known integer.
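The unboundedness claim can be illustrated with a small sketch (assuming NumPy): when $X = Y$ is uniform over $K$ symbols, $I(X,Y) = \log_2 K$ bits, which grows without limit as the alphabet grows:

```python
import numpy as np

def mi_bits(joint):
    """I(X,Y) in bits from a joint probability table."""
    px, py = joint.sum(axis=1), joint.sum(axis=0)
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log2(joint[nz] / np.outer(px, py)[nz])))

# X = Y uniform over K symbols gives I(X,Y) = log2(K) bits.
values = {K: mi_bits(np.eye(K) / K) for K in (2, 4, 1024)}
print(values)  # grows like log2(K): 1, 2, 10 bits
```

So with a large enough (or unknown, or countably infinite) alphabet there is no fixed numerical bound such as 1.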
Q: A good way to show lots of data graphically

A: The best "graph" is so obvious nobody has mentioned it yet: make maps. Housing data depend fundamentally on spatial location (according to the old saw about real estate), so the very first thing to do is make a clear, detailed map of each variable. To do this well with a third of a million points really requires an industrial-strength GIS, which can make short work of the process. After that it makes sense to go on and make probability plots and boxplots to explore univariate distributions, and to plot scatterplot matrices and wandering schematic boxplots, etc., to explore dependencies--but the maps will immediately suggest what to explore, how to model the data relationships, and how to break up the data geographically into meaningful subsets.
Q: A good way to show lots of data graphically

A: I'd recommend taking a look at GGobi, which also has an R interface, at least for exploratory purposes. It has a number of graphical displays that are especially useful for dealing with a large number of observations and variables and for linking these together. You might want to start by watching some of the videos under the "Watch a demo" section on the Learn GGobi page.

Update

Links to Hadley Wickham's tools for GGobi, as suggested by chl in the comments:

DescribeDisplay: "R package that provides a way to recreate ggobi graphics in R"
clusterfly: "Explore clustering results in high dimensions"
rggobi: "R package that provides an easy interface with GGobi"
Q: A good way to show lots of data graphically

A: I feel you are actually asking two questions: 1) what types of visualizations to use, and 2) what R package can produce them.

As for what type of graph to use, there are many, and it depends on your needs (e.g., the types of variables: numeric, factor, geographic, etc., and the type of connections you are interested in displaying):

If you have many numeric variables, you might want to use a scatterplot matrix (have a look here).
If you have many factor variables, you might want to use a scatterplot matrix for factors (have a look here).
You could also use parallel coordinates; there are several ways to do this in R.
For a wider range of graphical facilities in R, have a look at the graphics task view.

Now regarding how to do it: one problem with many data points is the time it takes to create the plot. ggplot2, iplots, and GGobi are not very good with too many data points (at least in my experience). In that case you might want to fall back on R's base graphics facilities, or sample your data and use all the other tools on the sample. Or you can hope that the people developing iplots eXtreme (Acinonyx) will reach an advanced release stage.
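The sampling idea mentioned above can be sketched in a few lines (hypothetical data, assuming NumPy): a random 10k-point subsample preserves the overall structure of 300k points at a fraction of the rendering cost:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 300_000
x = rng.normal(size=n)
y = 2 * x + rng.normal(size=n)          # some bivariate structure

# Draw a random subsample for plotting instead of all 300k points.
idx = rng.choice(n, size=10_000, replace=False)
xs, ys = x[idx], y[idx]

# The subsample preserves the relationship almost exactly:
print(np.corrcoef(x, y)[0, 1], np.corrcoef(xs, ys)[0, 1])
```

The subsampled `xs, ys` can then be handed to whichever plotting tool you prefer without the slowdown of drawing every point.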
Q: A good way to show lots of data graphically

A: Mondrian provides interactive features and handles quite large data sets (it's in Java, though).

ParaView includes 2D/3D visualization features.
Q: A good way to show lots of data graphically

A: I would like to bring to your attention Parallel Coordinates: Visual Multidimensional Geometry and Its Applications, which contains the latest breakthroughs and applications in the field.

The book was praised by Stephen Hawking, among others. Surfaces are described (using duality) by their normal vectors at their points. It contains applications to air traffic control (automatic collision avoidance; 3 USA patents), multivariate data mining (on real datasets, some with hundreds of variables), multiobjective optimization, process control, intensive-care smart displays, security, network visualization, and recently Big Data.
Q: Why does the component-wise median not make sense in higher dimensions?

A: The underlying concept is that a median splits the data (or a distribution) into two halves with equal amounts in each half (by count or probability).

Even in one dimension the median is problematic. When clustering occurs, one cluster of values may be near $x_0$ and another cluster near $x_1,$ far from $x_0.$ Slight changes in the amount of data (or probability) can shift the median from one cluster to the other. But, at the least, a median can always be located close to some data values or probability support. Therefore we shouldn't complain about multidimensional examples of the same phenomenon.

The fundamental problem is that the point whose coordinates are the marginal medians can be located unreasonably far from any data values (or probability).

Here's an extreme example in three dimensions. Consider a nine-element dataset consisting of one value near $(1,0,0),$ two values near $(0,1,0),$ and three values each near $(0,0,1)$ and $(1,1,1).$ Such data often arise when the values are proportions: in such cases anything outside the cube is meaningless and values near the corners (as in this dataset) are extreme.

$$\begin{array}{lll|r}
\text{x}&\text{y} &\text{z}& \text{Count} \\
\hline
1 & 0 & 0 & 1 \\
0 & 1 & 0 & 2 \\
0 & 0 & 1 & 3 \\
1 & 1 & 1 & 3 \\
\hline
0 & 1 & 1 & \text{median}
\end{array}$$

These data are located near four corners of the unit cube:

The blue starbursts indicate the data locations. Their sizes reflect the amount of data at each location: you can see there is a preponderance of values in the back, to the right, and at the top.

You can check that the medians of the coordinates in this dataset are near $0,$ $1,$ and $1,$ respectively. For instance, four of the nine values of the first coordinate equal $1$ and the other five are near $0,$ putting their median near $0.$

Consequently, the point of marginal medians is $(0,1,1).$ But this isn't anywhere near any of the data--indeed, it's about as far from any of them as one can possibly get. We would struggle to interpret such a "median" as the center of anything. All the data lie (relatively far) to one side of it.

For alternative approaches, please see our thread on multivariate generalizations of the median.
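The example above can be verified numerically (a quick sketch assuming NumPy):

```python
import numpy as np

# Nine points at four corners of the unit cube, with counts 1, 2, 3, 3.
data = np.array([(1, 0, 0)] * 1 + [(0, 1, 0)] * 2 +
                [(0, 0, 1)] * 3 + [(1, 1, 1)] * 3, dtype=float)

marginal_median = np.median(data, axis=0)
print(marginal_median)                       # [0. 1. 1.]

# Distance from the marginal median to the *nearest* data point:
nearest = np.linalg.norm(data - marginal_median, axis=1).min()
print(nearest)                               # 1.0 -- far from every data point
```

The component-wise median $(0,1,1)$ sits a full unit away from its nearest data point, even though every observation lies at a corner of the cube.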
Q: Feature importance for linear regression

A: Linear regression models are already highly interpretable. I recommend you read the respective chapter in the book Interpretable Machine Learning (available here).

In addition, you could use a model-agnostic approach like permutation feature importance (see chapter 5.5 in the IML book). The idea was originally introduced by Leo Breiman (2001) for random forests, but it can be adapted to work with any machine learning model. The steps for computing the importance are:

1. Estimate the original model error.
2. For every predictor j (1 .. p):
   a. Permute the values of predictor j, leaving the rest of the dataset as it is.
   b. Estimate the error of the model on the permuted data.
   c. Calculate the difference between the error of the original (baseline) model and the permuted model.
3. Sort the resulting difference scores in descending order.

Permutation feature importance is available in several R packages, such as:

iml
DALEX
vip
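The steps above can be sketched with plain NumPy (hypothetical simulated data; the R packages listed handle the bookkeeping for you):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000
X = rng.normal(size=(n, 3))
y = 3 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)   # X[:, 2] is pure noise

# Fit ordinary least squares (with intercept) and record the baseline error.
A = np.column_stack([np.ones(n), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)
mse = lambda Xm: float(np.mean((y - np.column_stack([np.ones(n), Xm]) @ beta) ** 2))
baseline = mse(X)

# For every predictor j: permute column j, re-score, record the error increase.
importance = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importance.append(mse(Xp) - baseline)
print(importance)   # largest for column 0, near zero for the noise column
```

The error increase is largest for the strongest predictor and essentially zero for the irrelevant one, which is exactly the ranking the procedure is meant to recover.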
Q: Feature importance for linear regression

A: Many available methods rely on decomposing the $R^2$ to assign ranks or relative importance to each predictor in a multiple linear regression model. One approach in this family is better known under the term "dominance analysis" (see Azen et al. 2003). Azen et al. (2003) also discuss other measures of importance, such as importance based on regression coefficients, based on correlations, or based on a combination of coefficients and correlations. A good general overview of techniques based on variance decomposition can be found in the paper by Grömping (2012). These techniques are implemented in the R packages relaimpo, domir, and yhat. Similar procedures are available for other software.

In his book, Frank Harrell uses the partial $\chi^{2}$ minus its degrees of freedom as an importance metric, and the bootstrap to create confidence intervals around the ranks (see Harrell (2015), page 117 ff.).

References

Azen R, Budescu DV (2003): The Dominance Analysis Approach for Comparing Predictors in Multiple Regression. Psychological Methods 8:2, 129-148. (link to PDF)
Grömping U (2012): Estimators of Relative Importance in Linear Regression Based on Variance Decomposition. The American Statistician 61:2, 139-147. (link to PDF)
Harrell FE (2015): Regression Modeling Strategies. 2nd ed. Springer.
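A minimal numerical sketch of the variance-decomposition idea (assuming NumPy; relaimpo and domir implement this properly): for each predictor, average the increase in $R^2$ from adding it, first over all subsets of the other predictors of a given size and then over sizes. These averaged shares sum exactly to the full-model $R^2$:

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, p = 2_000, 3
X = rng.normal(size=(n, p))
y = 2 * X[:, 0] + 1 * X[:, 1] + rng.normal(size=n)   # X[:, 2] is irrelevant

def r2(cols):
    """R^2 of an OLS fit on the given predictor columns (with intercept)."""
    A = np.column_stack([np.ones(n)] + [X[:, c] for c in cols])
    resid = y - A @ np.linalg.lstsq(A, y, rcond=None)[0]
    return 1.0 - resid.var() / y.var()

importance = []
for j in range(p):
    others = [k for k in range(p) if k != j]
    per_size = []
    for r in range(p):            # subset sizes 0 .. p-1
        gains = [r2(list(S) + [j]) - r2(list(S))
                 for S in itertools.combinations(others, r)]
        per_size.append(np.mean(gains))
    importance.append(float(np.mean(per_size)))
print(importance)   # shares for X0, X1, X2; they sum to the full-model R^2
```

This per-size averaging is the Shapley-value (LMG-style) weighting, which is what guarantees the shares add up to the total $R^2$.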
Q: Feature importance for linear regression

A: Yes, it is possible. Basically, any learner can be bootstrap-aggregated (bagged) to produce an ensemble model, and for any bagged ensemble model the variable importance can be computed. Since the random forest learner inherently produces bagged ensemble models, you get the variable importance almost with no extra computation time. For linear regression, which is not a bagged ensemble, you would need to bag the learner first; that is, re-run the learner, e.g., 50 times on bootstrap-sampled data. So for large data sets it is computationally expensive (roughly a factor of 50) to bag any learner, but for diagnostic purposes it can be very interesting.

For a regression example: if a strict interaction (with no main effects) between two variables is central to producing accurate predictions, the vanilla linear model will ascribe no importance to these two variables, because it cannot utilize this information. Any general-purpose non-linear learner will be able to capture this interaction effect, and will therefore ascribe importance to the variables.

Here's a related answer including a practical coding example:
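The interaction point can be seen in a short simulation (a hypothetical sketch assuming NumPy): with $y = x_1 x_2$ and symmetric inputs, neither variable alone has any linear association with $y$, so a plain linear model sees nothing, yet together the two variables determine $y$ exactly:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5_000
x1 = rng.choice([-1.0, 1.0], size=n)
x2 = rng.choice([-1.0, 1.0], size=n)
y = x1 * x2                      # pure interaction, no main effects

print(np.corrcoef(x1, y)[0, 1])  # ~0: no linear signal from x1 alone
print(np.corrcoef(x2, y)[0, 1])  # ~0: likewise for x2
print(np.all(x1 * x2 == y))      # jointly they determine y exactly
```

A non-linear learner (e.g. a random forest) fit to `(x1, x2)` can exploit this structure, so a permutation-style importance would rank both variables highly even though their linear coefficients are near zero.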
18,412 | Linear regression when Y is bounded and discrete | When a response or outcome $Y$ is bounded, various questions arise in fitting a model, including the following:
Any model that could predict values for the response outside those bounds is in principle dubious. Hence a linear model might be problematic as there are no bounds on $\hat Y = Xb$ for predictors $X$ and coefficients $b$ whenever the $X$ are themselves unbounded in one or both directions. However, the relationship might be weak enough for this not to bite and/or predictions might well remain within bounds over the observed or plausible range of the predictors. At one extreme, if the response is some mean $+$ noise it hardly matters which model one fits.
As the response can't exceed its bounds, a nonlinear relationship is often more plausible with predicted responses tailing off to approach bounds asymptotically. Sigmoid curves or surfaces such as those predicted by logit or probit models are attractive in this regard and are now not difficult to fit. A response such as literacy (or fraction adopting any new idea) often shows such a sigmoid curve in time and plausibly with almost any other predictor.
A bounded response can't have the variance properties expected in plain or vanilla regression. Necessarily as the mean response approaches lower and upper bounds, the variance always approaches zero.
A model should be chosen according to what works and knowledge of the underlying generating process. Whether the client or audience knows about particular model families may also guide practice.
Note that I am deliberately avoiding blanket judgments such as good/not good, appropriate/not appropriate, right/wrong. All models are approximations at best and which approximation appeals, or is good enough for a project, isn't so easy to predict. I typically favour logit models as first choice for bounded responses myself, but even that preference is based partly on habit (e.g. my avoiding probit models for no very good reason) and partly on where I will report results, usually to readerships that are, or should be, statistically well informed.
Your examples of discrete scales are for scores 1-100 (in assignments I mark, 0 is certainly possible!) or rankings 1-17. For scales like that, I would usually think of fitting continuous models to responses scaled to [0, 1]. There are, however, practitioners of ordinal regression models who would happily fit such models to scales with a fairly large number of discrete values. I am happy if they reply if they are so minded. | Linear regression when Y is bounded and discrete | When a response or outcome $Y$ is bounded, various questions arise in fitting a model, including the following:
Any model that could predict values for the response outside those bounds is in prin | Linear regression when Y is bounded and discrete
When a response or outcome $Y$ is bounded, various questions arise in fitting a model, including the following:
Any model that could predict values for the response outside those bounds is in principle dubious. Hence a linear model might be problematic as there are no bounds on $\hat Y = Xb$ for predictors $X$ and coefficients $b$ whenever the $X$ are themselves unbounded in one or both directions. However, the relationship might be weak enough for this not to bite and/or predictions might well remain within bounds over the observed or plausible range of the predictors. At one extreme, if the response is some mean $+$ noise it hardly matters which model one fits.
As the response can't exceed its bounds, a nonlinear relationship is often more plausible with predicted responses tailing off to approach bounds asymptotically. Sigmoid curves or surfaces such as those predicted by logit or probit models are attractive in this regard and are now not difficult to fit. A response such as literacy (or fraction adopting any new idea) often shows such a sigmoid curve in time and plausibly with almost any other predictor.
A bounded response can't have the variance properties expected in plain or vanilla regression. Necessarily as the mean response approaches lower and upper bounds, the variance always approaches zero.
A model should be chosen according to what works and knowledge of the underlying generating process. Whether the client or audience knows about particular model families may also guide practice.
Note that I am deliberately avoiding blanket judgments such as good/not good, appropriate/not appropriate, right/wrong. All models are approximations at best and which approximation appeals, or is good enough for a project, isn't so easy to predict. I typically favour logit models as first choice for bounded responses myself, but even that preference is based partly on habit (e.g. my avoiding probit models for no very good reason) and partly on where I will report results, usually to readerships that are, or should be, statistically well informed.
Your examples of discrete scales are for scores 1-100 (in assignments I mark, 0 is certainly possible!) or rankings 1-17. For scales like that, I would usually think of fitting continuous models to responses scaled to [0, 1]. There are, however, practitioners of ordinal regression models who would happily fit such models to scales with a fairly large number of discrete values. I am happy if they reply if they are so minded. | Linear regression when Y is bounded and discrete
18,413 | Linear regression when Y is bounded and discrete | I work in health services research. We collect patient-reported outcomes, e.g. physical function or depressive symptoms, and they are frequently scored in the format you mentioned: a 0 to N scale generated by summing up all the individual questions in the scale.
The vast majority of the literature I've reviewed has just used a linear model (or a hierarchical linear model if the data stem from repeat observations). I've yet to see anyone use @NickCox's suggestion for a (fractional) logit model, although it is a perfectly plausible model.
Item response theory strikes me as another plausible statistical model to apply. This is where you assume some latent trait $\theta$ causes responses to the questions using a logistic or ordered logistic model. That inherently handles the issues of boundedness and possible non-linearity that Nick raised.
The graph below stems from my forthcoming dissertation work. This is where I fit a linear model (red) to a depressive symptom question score that's been converted to Z-scores, and an (explanatory) IRT model in blue to the same questions. Basically, the coefficients for both models are on the same scale (i.e. in standard deviations). There's actually a fair bit of agreement in the size of the coefficients. As Nick alluded to, all models are wrong. But the linear model may not be too wrong to use.
That said, a fundamental assumption of almost all current IRT models is that the trait in question is bipolar, i.e. its support is $-\infty$ to $\infty$. That's probably not true of depressive symptoms. Models for unipolar latent traits are still under development, and standard software can't fit them. A lot of the traits in health services research that we're interested in are likely to be unipolar, e.g. depressive symptoms, other aspects of psychopathology, patient satisfaction. So the IRT model may also be wrong as well.
(Note: the model above was fit using Phil Chalmers' mirt package in R. Graph produced using ggplot2 and ggthemes. Color scheme draws from the Stata default color scheme.)
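For readers unfamiliar with IRT, here is a hedged sketch of the basic two-parameter logistic (2PL) item model, not the explanatory model fit with mirt above; the parameter values are made up for illustration.

```python
import math

def p_2pl(theta, a, b):
    """Two-parameter logistic IRT model: probability that a person with
    latent trait theta endorses an item with discrimination a and
    difficulty b."""
    return 1 / (1 + math.exp(-a * (theta - b)))

# The probability is 0.5 exactly at theta == b and rises with the trait.
assert abs(p_2pl(0.0, a=1.5, b=0.0) - 0.5) < 1e-12
assert p_2pl(1.0, 1.5, 0.0) > p_2pl(-1.0, 1.5, 0.0)
```

The logistic link is what handles boundedness and non-linearity: no value of $\theta$ can push the predicted probability outside (0, 1).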
18,414 | Linear regression when Y is bounded and discrete | Take a look at the predicted values and check if they have roughly the same distribution as the original Ys. If this is the case, linear regression is probably fine, and you will gain little by improving your model.
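One crude way to carry out this check, sketched here with invented observed and fitted values, is to compare a few empirical quantiles of the two distributions:

```python
def quantiles(values, probs=(0.1, 0.25, 0.5, 0.75, 0.9)):
    """Crude nearest-rank empirical quantiles, enough for an eyeball check."""
    s = sorted(values)
    return [s[min(int(p * len(s)), len(s) - 1)] for p in probs]

observed = [12, 35, 47, 51, 58, 63, 70, 74, 81, 95]   # invented scores
fitted = [20, 33, 44, 50, 56, 61, 68, 75, 83, 90]     # invented model output

# If the fitted quantiles track the observed ones closely, a linear model
# is probably adequate for these bounded scores.
gaps = [abs(o - f) for o, f in zip(quantiles(observed), quantiles(fitted))]
assert max(gaps) < 10
```

A quantile-quantile plot of fitted against observed values conveys the same information graphically.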
18,415 | Linear regression when Y is bounded and discrete | A linear regression may "adequately" describe such data, but it's unlikely. Many assumptions of linear regression tend to be violated in this type of data to such a degree that linear regression becomes ill-advised. I'll just choose a few assumptions as examples:
Normality - Even ignoring the discreteness of such data, such data tends to exhibit extreme violations of normality because the distributions are "cut off" by the bounds.
Homoscedasticity - This type of data tends to violate homoscedasticity. Variances tend to be larger when the actual mean is towards the center of the range, as compared to the edges.
Linearity - Since the range of Y is bounded, the assumption is automatically violated.
The violations of these assumptions are mitigated if the data tends to fall around the center of the range, away from the edges. But really, linear regression is not the optimal tool for this kind of data. Much better alternatives might be binomial regression or Poisson regression.
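The heteroscedasticity point above can be made concrete with a binomial count score: if a 0-to-$n$ sum score behaves like Binomial($n$, $p$), its variance $np(1-p)$ peaks when the mean sits mid-range and shrinks toward the bounds. A tiny sketch:

```python
def binom_var(n, p):
    """Variance of a Binomial(n, p) count, e.g. a 0..n sum score."""
    return n * p * (1 - p)

# Variance peaks when the mean is mid-range and collapses near the bounds.
n = 100
assert binom_var(n, 0.5) > binom_var(n, 0.9) > binom_var(n, 0.99)
assert binom_var(n, 0.5) == 25.0
```

This is one reason binomial regression, which builds that mean-variance relationship in, can beat ordinary least squares here.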
18,416 | Linear regression when Y is bounded and discrete | Use a CDF (cumulative distribution function). If your model is y = xb + e, then change it to y = cdf(xb + e). You will need to rescale your dependent variable data to fall between 0 and 1. If it's positive numbers, divide by their max, and take your model predictions and multiply by the same number.
Then go check the fit and see if the bounded predictions improve things.
You probably want to use a canned algorithm to take care of the statistics for you.
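A minimal sketch of the transformation described above, using the standard normal CDF via `math.erf`; the response values and the coefficient `b = 0.5` are invented:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

y = [10, 40, 55, 80]                 # invented positive responses
ymax = max(y)
y01 = [v / ymax for v in y]          # rescaled to [0, 1] by dividing by the max
b = 0.5                              # invented coefficient
preds = [ymax * norm_cdf(b * x) for x in [-3, -1, 0, 1, 3]]
assert all(0 <= p <= ymax for p in preds)   # predictions respect the bounds
```

Using a logistic CDF instead of the normal one recovers the logit models suggested in the other answers; a canned routine would also estimate b properly rather than assume it.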
18,417 | Linear regression when Y is bounded and discrete | If the response only takes a few categories, you may be able to use classification methods or ordinal regression if your response variable is ordinal.
Plain linear regression will neither give you discrete categories nor bounded response variables. The latter can be fixed by using a logit model like in logistic regression. For something like a test score with 100 categories 1-100, you might as well simplify your prediction and use a bounded response variable.
18,418 | Pros and cons of weight normalization vs batch normalization | Batch Norm:
(+) Stable if the batch size is large
(+) Robust (in train) to the scale & shift of input data
(+) Robust to the scale of weight vector
(+) Scale of update decreases while training
(-) Not good for online learning
(-) Not good for RNN, LSTM
(-) Different calculation between train and test
Weight Norm:
(+) Smaller calculation cost on CNN
(+) Comes with a well-thought-out weight initialization scheme
(+) Implementation is easy
(+) Robust to the scale of weight vector
(-) Compared with the others, might be unstable on training
(-) Highly dependent on the input data
Layer Norm:
(+) Effective for RNNs with small mini-batches
(+) Robust to the scale of input
(+) Robust to the scale and shift of weight matrix
(+) Scale of update decreases while training
(-) Might not be good for CNNs (Batch Norm is better in some cases)
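To make the three schemes concrete, here is a minimal sketch (assumed details, not from the answer above) of the axes each one normalizes over: batch norm per feature across the batch, layer norm per example across its features, and weight norm rescaling a weight vector to a learned length g.

```python
def mean_std(xs, eps=1e-5):
    m = sum(xs) / len(xs)
    var = sum((x - m) ** 2 for x in xs) / len(xs)
    return m, (var + eps) ** 0.5

def batch_norm(batch):
    """Normalize each feature (column) over the batch dimension."""
    stats = [mean_std(col) for col in zip(*batch)]
    return [[(x - m) / s for x, (m, s) in zip(row, stats)] for row in batch]

def layer_norm(batch):
    """Normalize each example (row) over its own features."""
    out = []
    for row in batch:
        m, s = mean_std(row)
        out.append([(x - m) / s for x in row])
    return out

def weight_norm(v, g):
    """Weight normalization: w = g * v / ||v|| decouples length from direction."""
    norm = sum(x * x for x in v) ** 0.5
    return [g * x / norm for x in v]

batch = [[1.0, 2.0, 3.0], [3.0, 6.0, 9.0]]
bn = batch_norm(batch)
assert all(abs(sum(col)) < 1e-9 for col in zip(*bn))   # per-feature zero mean
# Layer norm of one example is identical with or without the rest of the
# batch, which is why it suits small mini-batches and online settings.
assert layer_norm(batch)[0] == layer_norm([batch[0]])[0]
w = weight_norm([3.0, 4.0], g=2.0)
assert abs(sum(x * x for x in w) ** 0.5 - 2.0) < 1e-9
```

Real implementations add learned scale and shift parameters and, for batch norm, running statistics for use at test time; those are omitted here.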
18,419 | Pros and cons of weight normalization vs batch normalization | https://arxiv.org/abs/1709.08145
'... a detailed comparison of BN and WN algorithms using ResNet-50 network trained on ImageNet. We found that although WN achieves better training accuracy, the final test accuracy is significantly lower (≈6%) than that of BN. This result demonstrates the surprising strength of the BN regularization effect which we were unable to compensate for using standard regularization techniques like dropout and weight decay. We also found that training of deep networks with WN algorithms is significantly less stable compared to BN, limiting their practical applications.'
18,420 | What is the difference between the different types of residuals in survival analysis (Cox regression)? | Cox-Snell residuals $r_{Ci}$ are used to assess a model's goodness-of-fit. By plotting the Cox-Snell residuals against the cumulative hazard function, a model's fit can be assessed. A well-fitting model will exhibit a straight line through the origin with unit gradient. It should be noted that it will take a particularly ill-fitting model for the Cox-Snell residuals to deviate significantly from this. It is also not uncommon to see some slight jumps occurring at the extremities of the graph. One criticism of Cox-Snell residuals is that they do not account for censored observations, therefore the adjusted Cox-Snell residuals were devised by Crowley & Hu (1977), whereby the standard Cox-Snell residual $r_{Ci}$ is used for uncensored observations and $r_{Ci} + \Delta$, where $\Delta = \log (2) = 0.693$, is used for censored observations.
Martingale residuals $r_{Mi}$ can be defined as $r_{Mi} = \delta_i - r_{Ci}$ where $\delta_i$ is a switch taking the value 0 if observation $i$ is censored and 1 if observation $i$ is uncensored. Martingale residuals lie in $(-\infty, 1]$ for uncensored observations and in $(-\infty, 0]$ for censored observations. Martingale residuals can be used to assess the true functional form of a particular covariate (Therneau et al. (1990)). It is often useful to overlay a LOESS curve over this plot as they can be noisy in plots with lots of observations. Martingale residuals can also be used to assess outliers in the data set whereby the survivor function predicts an event either too early or too late, however, it's often better to use the deviance residual for this.
A deviance residual, $r_{Di} = \operatorname{sgn}(r_{Mi})\sqrt{-2\left[r_{Mi} + \delta_i \log{(\delta_i-r_{Mi})}\right]}$, where $\operatorname{sgn}$ takes the value 1 for a positive martingale residual and -1 for a negative martingale residual. A residual of high absolute value is indicative of an outlier. A positively valued deviance residual is indicative of an observation whereby the event occurred sooner than predicted; the converse is true for a negatively valued residual. Unlike martingale residuals, deviance residuals are mean-centered around 0, making them significantly easier to interpret than martingale residuals when looking for outliers. One application of deviance residuals is to jackknife the dataset with just one parameter modeled and test for significant differences in parameter coefficients as each observation is removed. A significant change would indicate a highly influential observation.
Schoenfeld residuals are slightly different in that each residual corresponds to a variable, not an observation. Schoenfeld residuals are used to test the proportional hazards assumption. Grambsch and Therneau (1994) proposed that scaled Schoenfeld residuals may be more useful. By plotting event time against the Schoenfeld residuals for each variable, the variable's adherence to the PH assumption can be assessed by fitting a LOESS curve to the plot. A straight line passing through a residual value of 0 with gradient 0 indicates that the variable satisfies the PH assumption and therefore does not depend on time. Schoenfeld residuals can also be assessed through a hypothesis test.
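A small sketch of the martingale and deviance residual definitions above, with the bracketing in the deviance formula written out explicitly; the Cox-Snell values fed in are invented:

```python
import math

def martingale(delta, r_c):
    """Martingale residual from censoring switch delta and Cox-Snell r_c."""
    return delta - r_c

def deviance(delta, r_c):
    """Deviance residual: sgn(r_M) * sqrt(-2 * [r_M + delta*log(delta - r_M)])."""
    r_m = martingale(delta, r_c)
    sgn = 1.0 if r_m > 0 else -1.0
    inner = -2.0 * (r_m + (delta * math.log(delta - r_m) if delta else 0.0))
    return sgn * math.sqrt(inner)

# An event that happened sooner than predicted gives a positive residual;
# a censored observation with a large Cox-Snell value gives a negative one.
assert deviance(1, 0.1) > 0
assert deviance(0, 2.0) < 0
```

Note that $\delta_i - r_{Mi} = r_{Ci}$, so the log term inside the square root is just $\delta_i \log r_{Ci}$.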
18,421 | Intuition for Support Vector Machines and the hyperplane | I'm going to try to help you gain some sense of why adding dimensions helps a linear classifier do a better job of separating two classes.
Imagine you have two continuous predictors $X_1$ and $X_2$ and $n=3$, and we're doing a binary classification. This means our data looks something like this:
Now imagine assigning some of the points to class 1 and some to class 2. Note that no matter how we assign classes to points we can always draw a line that perfectly separates the two classes.
But now let's say we add a new point:
Now there are assignments of these points to two classes such that a line cannot perfectly separate them; one such assignment is given by the coloring in the figure (this is an example of an XOR pattern, a very useful one to keep in mind when evaluating classifiers). So this shows us how with $p=2$ variables we can use a linear classifier to perfectly classify any three (non-collinear) points but we cannot in general perfectly classify 4 non-collinear points.
But what happens if we now add another predictor $X_3$?
Here lighter shaded points are closer to the origin. It may be a little hard to see, but now with $p=3$ and $n=4$ we again can perfectly classify any assignment of class labels to these points.
The general result: with $p$ predictors a linear model can perfectly classify any assignment of two classes to $p+1$ points.
The point of all of this is that if we keep $n$ fixed and increase $p$ we increase the number of patterns that we can separate, until we reach the point where we can perfectly classify any assignment of labels. With kernel SVM we implicitly fit a linear classifier in a high dimensional space, so this is why we very rarely have to worry about the existence of a separation.
For a set of possible classifiers $\mathscr F$, if for a sample of $n$ points there exist functions in $\mathscr F$ that can perfectly classify any assignment of labels to these $n$ points, we say that $\mathscr F$ can shatter $n$ points. If $\mathscr F$ is the set of all linear classifiers in $p$ variables then $\mathscr F$ can shatter up to $n=p+1$ points. If $\mathscr F$ is the space of all measurable functions of $p$ variables then it can shatter any number of points. This notion of shattering, which tells us about the complexity of a set of possible classifiers, comes from statistical learning theory and can be used to make statements about the amount of overfitting that a set of classifiers can do. If you're interested in it I highly recommend von Luxburg and Schölkopf, "Statistical Learning Theory: Models, Concepts, and Results" (2008).
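The XOR pattern mentioned above can be checked by brute force. The sketch below (illustrative only) searches a small grid of integer hyperplanes: no line separates the 2-D XOR labelling, but adding the product feature $x_1 x_2$ as a third predictor makes it separable.

```python
from itertools import product

def linearly_separable(points, labels, grid):
    """Brute-force search for a separating hyperplane w.x + b > 0 over a
    small grid of integer weights; crude, but enough for 4 points."""
    dims = len(points[0])
    for w in product(grid, repeat=dims):
        for b in grid:
            if all((sum(wi * xi for wi, xi in zip(w, p)) + b > 0) == (y == 1)
                   for p, y in zip(points, labels)):
                return True
    return False

xor_pts = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor_lab = [0, 1, 1, 0]
grid = range(-3, 4)
assert not linearly_separable(xor_pts, xor_lab, grid)      # p = 2: no line works
lifted = [(x1, x2, x1 * x2) for x1, x2 in xor_pts]         # add a 3rd predictor
assert linearly_separable(lifted, xor_lab, grid)           # p = 3: separable
```

This is exactly the trick kernel SVMs perform implicitly, lifting the data into a feature space where a separating hyperplane exists.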
18,422 | Intuition for Support Vector Machines and the hyperplane | It's easy to make a mistake when you take your intuition about low dimensional spaces and apply it to high dimensional spaces. Your intuition is exactly backwards in this case. It turns out to be much easier to find a separating hyperplane in the higher dimensional space than it is in the lower space.
Even though the red and blue distributions overlap when you look at any pair of variables, when looking at all 15 variables at once it is very possible that they don't overlap at all.
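A quick Monte Carlo illustration of this point (the means, spread, and sample sizes are invented): two Gaussian classes whose marginals overlap heavily in every single coordinate are almost perfectly separated by a simple linear rule once all 15 coordinates are used together.

```python
import random

random.seed(0)
d, n = 15, 2000

def sample(mean):
    return [random.gauss(mean, 1.0) for _ in range(d)]

# Class means differ by only 1 sd in every coordinate, so each 1-D marginal
# overlaps heavily, yet summing all 15 coordinates separates the classes:
# the sums are roughly N(0, 15) vs N(15, 15), about 3.9 sd apart.
red = [sample(0.0) for _ in range(n)]
blue = [sample(1.0) for _ in range(n)]
threshold = d / 2
errors = sum(sum(x) >= threshold for x in red) + \
         sum(sum(x) < threshold for x in blue)
assert errors / (2 * n) < 0.05   # under 5% misclassified by one linear rule
```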
18,423 | Intuition for Support Vector Machines and the hyperplane | You have 15 variables, but not all of them are equally significant for discrimination of your dependent variable (some of them might even be nearly-irrelevant).
Principal Component Analysis (PCA) recomputes a linear basis of those 15 variables and orders the components so that the first few typically explain most of the variance. This allows you to reduce a 15-dimensional problem to (say) a 2-, 3-, 4-, or 5-dimensional problem. Hence it makes plotting more intuitive; typically you can use two or three axes for numeric (or high-cardinality ordinal) variables, then use marker color, shape, and size for three extra dimensions (maybe more if you can combine low-cardinality ordinals). So plotting with the 6 most important PCs should give you a clearer visualization of your decision surface.
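As a sketch of what PCA computes here, this finds the leading principal component by power iteration on the covariance matrix; the toy data are invented:

```python
def first_pc(data, iters=200):
    """Leading principal component via power iteration on the covariance."""
    n, d = len(data), len(data[0])
    means = [sum(col) / n for col in zip(*data)]
    centered = [[x - m for x, m in zip(row, means)] for row in data]
    cov = [[sum(r[i] * r[j] for r in centered) / (n - 1) for j in range(d)]
           for i in range(d)]
    v = [1.0] * d
    for _ in range(iters):
        v = [sum(cov[i][j] * v[j] for j in range(d)) for i in range(d)]
        norm = sum(x * x for x in v) ** 0.5
        v = [x / norm for x in v]
    return v

data = [[1.0, 1.1], [2.0, 1.9], [3.0, 3.2], [4.0, 3.9]]   # toy: y roughly = x
v = first_pc(data)
# Most variance lies along the diagonal, so the PC weights are about equal.
assert abs(abs(v[0]) - abs(v[1])) < 0.2
```

In practice one would use a library routine (e.g. a singular value decomposition) rather than hand-rolled power iteration, but the idea of ordering directions by explained variance is the same.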
18,424 | Explanation of I-map in a Markov/Bayesian network | From my understanding, if a DAG G is said to be the I-Map of probability distribution P, then every independence we can observe from G is encoded in P.
Let's consider a simple example:
Suppose distribution $P_1$ has independence $\{(I\perp D)_p\}$, and distribution $P_2$ has no independence, or $\emptyset$.
Now we define two DAGs: $G$ and $G'$
$G$ is I-Map of $P_1$ because $I$ and $D$ are independent in both $G$ and $P_1$. $G$ is not I-Map of $P_2$ because $P_2$ fails to satisfy the independence between $I$ and $D$.
(Surprisingly?) $G'$ is I-Map of both $P_1$ and $P_2$ because the independence in $G'$ is $\emptyset$. Since $\emptyset$ is a subset of every set, both $P_1$ and $P_2$ satisfy the independence in $G'$.
Therefore, I-Map, in plain words, means that the set of independencies shown in a DAG is a subset of the set of independencies satisfied by the distribution.
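The example can be written out directly (a toy sketch; these set names are mine, just restating the example above): represent the independencies as sets and test the I-map condition as a subset check:

```python
# Independencies encoded by each DAG (from its structure)
I_G      = {("I", "D")}   # G has no edge between I and D: it asserts I is independent of D
I_Gprime = set()          # G' is fully connected: it asserts nothing

# Independencies that actually hold in each distribution
I_P1 = {("I", "D")}       # P1 satisfies the independence of I and D
I_P2 = set()              # P2 has no independencies

def is_imap(I_graph, I_dist):
    """A DAG is an I-map of P iff every independence it asserts holds in P."""
    return I_graph <= I_dist

print(is_imap(I_G, I_P1))        # True
print(is_imap(I_G, I_P2))        # False: G asserts an independence P2 lacks
print(is_imap(I_Gprime, I_P1))   # True: the empty set is a subset of every set
print(is_imap(I_Gprime, I_P2))   # True
```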
Hope this helps to clear it up a bit for you.
18,425 | Explanation of I-map in a Markov/Bayesian network | "Does this represent every possible independence in a graph given every possible subset of the set of variables $Z$? Or are you able to define $I(P)$ because the graph structure is specified?"
No. The set of all possible conditional independencies expressed by a DAG $I(G)$ is different from the set of all possible conditional independencies we can find in a certain joint distribution $I(P)$.
"The whole last sentence is confusing to me. Maybe it's because I can't think abstractly enough, but I don't understand how the I-map of G is a subset of the I-map of P."
Normally $I(G)\subset I(P)$, which means the set of independencies we can read from the connectivity of the graph is only part of the independencies the joint distribution has; this reflects the soundness, rather than the completeness, of d-separation. All the independencies we can easily get from the graph are correct and can be verified in the joint distribution, but some dependencies/edges are redundant. If we didn't use the graph to express the independencies visually, it would be much harder to tackle the related problems by checking the joint distribution directly.
The use of a Bayes network is to express conditional independence, and the more conditional independencies we can express with the graph for the joint distribution we are dealing with, the better. Since almost everything in the universe depends to some extent on everything else, we simplify the problem by assuming some independencies; otherwise we could not tackle any problem at all.
An easy example: for every fully connected graph $I(G)=\emptyset$, and $\emptyset$ is a subset of the set of independencies of any joint distribution, so $I(G)\subset I(P)$ always holds. But such a graph is totally unrepresentative and useless, because it tells us nothing about the independence structure of the distribution.
If $I(G)=I(P)$, the graph is a perfect map (P-Map), which means all independencies can be perfectly expressed by the graph (and all the independencies in the graph are correct for the joint distribution).
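Continuing the toy-set sketch (the independence sets below are hypothetical): an I-map only needs $I(G)\subseteq I(P)$, while a P-Map needs equality:

```python
def is_imap(I_graph, I_dist):
    return I_graph <= I_dist   # soundness: everything the graph asserts holds in P

def is_pmap(I_graph, I_dist):
    return I_graph == I_dist   # perfect map: the graph captures *all* of I(P)

I_P = {("A", "B")}             # the distribution satisfies exactly one independence

I_full  = set()                # fully connected graph: asserts nothing
I_exact = {("A", "B")}         # graph whose only missing edge is A-B

print(is_imap(I_full, I_P), is_pmap(I_full, I_P))     # True False: I-map, but uninformative
print(is_imap(I_exact, I_P), is_pmap(I_exact, I_P))   # True True: a P-Map
```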
18,426 | Gradient for logistic loss function | My answer to my own question: yes, it can be shown that the gradient of the logistic loss is equal to the difference between the true values and the predicted probabilities. A brief explanation was found here.
First, the logistic loss is just the negative log-likelihood, so we can start with the expression for the log-likelihood (p. 74; this expression is the log-likelihood itself, not the negative log-likelihood):
$$L=y_{i}\cdot \log(p_{i})+(1-y_{i})\cdot \log(1-p_{i})$$
$p_{i}$ is the logistic function: $p_{i}=\frac{1}{1+e^{-\hat{y}_{i}}}$, where $\hat{y}_{i}$ is the predicted value before the logistic transformation (i.e., the log-odds):
$$L=y_{i}\cdot \log\left(\frac{1}{1+e^{-\hat{y}_{i}}}\right)+(1-y_{i})\cdot \log\left(\frac{e^{-\hat{y}_{i}}}{1+e^{-\hat{y}_{i}}}\right)$$
The first derivative with respect to $\hat{y}_{i}$, obtained using Wolfram Alpha:
$${L}'=\frac{y_{i}-(1-y_{i})\cdot e^{\hat{y}_{i}}}{1+e^{\hat{y}_{i}}}$$
After multiplying by $\frac{e^{-\hat{y}_{i}}}{e^{-\hat{y}_{i}}}$:
$${L}'=\frac{y_{i}\cdot e^{-\hat{y}_{i}}+y_{i}-1}{1+e^{-\hat{y}_{i}}}=
\frac{y_{i}\cdot (1+e^{-\hat{y}_{i}})}{1+e^{-\hat{y}_{i}}}-\frac{1}{1+e^{-\hat{y}_{i}}}=y_{i}-p_{i}$$
After changing the sign, we have the expression for the gradient of the logistic loss function:
$$p_{i}-y_{i}$$
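This is easy to verify with a quick finite-difference check (a minimal sketch; the numbers are arbitrary):

```python
import math

def neg_log_lik(y, yhat):
    """Logistic loss as a function of the log-odds yhat."""
    p = 1.0 / (1.0 + math.exp(-yhat))
    return -(y * math.log(p) + (1 - y) * math.log(1 - p))

y, yhat, h = 1.0, 0.3, 1e-6
p = 1.0 / (1.0 + math.exp(-yhat))

numeric  = (neg_log_lik(y, yhat + h) - neg_log_lik(y, yhat - h)) / (2 * h)
analytic = p - y                        # the gradient derived above

print(abs(numeric - analytic) < 1e-6)   # True
```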
18,427 | Gradient for logistic loss function | AdamO is correct: if you just want the gradient of the logistic loss (what the OP asked for in the title), then it needs a 1/(p(1-p)) factor. Unfortunately, people in the DL community often assume the logistic loss is always bundled with a sigmoid, pack the two gradients together, and call that the logistic loss gradient (the internet is filled with posts asserting this). Since the gradient of the sigmoid happens to be p(1-p), it cancels the 1/(p(1-p)) of the logistic loss gradient. But if you are implementing SGD (walking back through the layers) and applying the sigmoid gradient when you get to the sigmoid, then you need to start with the actual logistic loss gradient -- which has a 1/(p(1-p)).
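A small numeric sketch of that bookkeeping (values arbitrary): the gradient of the loss with respect to the probability $p$ is $(p-y)/(p(1-p))$, and multiplying by the sigmoid's own derivative $p(1-p)$ recovers the familiar $p-y$:

```python
import math

y, yhat = 1.0, 0.7
p = 1.0 / (1.0 + math.exp(-yhat))        # sigmoid output

# dLoss/dp for L = -(y log p + (1-y) log(1-p))
dL_dp = -(y / p) + (1 - y) / (1 - p)     # equals (p - y) / (p * (1 - p))

dp_dyhat = p * (1 - p)                   # derivative of the sigmoid

print(abs(dL_dp - (p - y) / (p * (1 - p))) < 1e-12)   # True
print(abs(dL_dp * dp_dyhat - (p - y)) < 1e-12)        # True: chain rule gives p - y
```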
18,428 | Best open source data visualization software to use with PowerPoint | Updated 2017-02-24:
I think the best solution is to use R with RStudio (Python with the IPython notebook is an alternative):
Data import
  Excel: readxl package
  Oracle: ora or RODBC package
Plotting: ggplot2
Exporting plots
  Copy-and-paste: RStudio's export plot functionality
  Programmatically: ReporteRs package
TLDR;
Data Import
There are numerous ways to import Excel (tabular) data. For Excel data, the readxl package is the easiest and most versatile. It generally gets the variable types correct on import.
Alternatives are to save the file as CSV and re-import. The readr package is good for this. @Nick Stauner provides perhaps the most basic solution using read.csv; the limitation is that this requires the additional step of saving a worksheet as a CSV file. This is not great if your data is spread across multiple sheets. It can get tedious though there are VBA programs for saving all sheets as CSV files. Google for them. Another limitation is getting the types of the variables correct. If you use read.csv, you often have to fix your types after importing in R.
There are a few packages that avoid these problems by allowing you to connect read/write from the spreadsheet directly or by using ODBC. Search on CRAN for excel or odbc to find the relevant one for your situation.
Plotting
In terms of getting plots into PowerPoint, use RStudio's export-plot functionality. The copy-and-paste method in RStudio is: Export plot > Copy plot to clipboard > Copy as: metafile, which captures the plot to the paste buffer, allowing you to paste directly into PowerPoint.
As far as generating plots, R has numerous options. The aforementioned ggplot2 package provides a very powerful interface for creating all sorts of plots. There are additional packages for doing hundreds or thousands of other types of plots/animations/etc. One limitation is that these are often buried in CRAN packages.
An alternative is to use the ReporteRs package. | Best open source data visualization software to use with PowerPoint | Updated 2017-02-24:
18,429 | Best open source data visualization software to use with PowerPoint | I don't know about "best", but the software environment you're named after fits all your requirements:
I keep my data in Excel spreadsheets; when I'm ready to import them to R, I save as CSV and use read.csv().
The RODBC and ora packages are available for importing from Oracle.
Images produced in R can be copied as bitmaps or metafiles and pasted directly into PowerPoint. You can find plenty of recommendations here for R as an open-source data visualization utility:
Open source tools for visualizing multi-dimensional data?
Does anyone know any good open source software for visualizing data from database?
Software for easy-yet-robust data exploration
Resources for learning to create data visualizations?
Resources for learning to use (/create) dynamic (/interactive) statistical visualization
R: update a graph dynamically!
How can I create nice graphs automatically?
Web visualization libraries
Free treemapping software
Recommended visualization libraries for standalone applications
Other suggestions in these threads are worth considering too, but I haven't tried them.
More enthusiasm about R in general (not just for data visualization):
What are some valuable Statistical Analysis open source projects?
What is your favorite, easy to use statistical analysis website or software package?
R vs. Python as a statistics workbench ... a close contest at least!
R is open-source. Though its learning curve is nontrivial, it becomes easy to use with experience.
18,430 | Best open source data visualization software to use with PowerPoint | I agree with Nick Stauner on R. And, with a username like "R Learner" I was tempted not to suggest other tools, but there are many. I'll wait to see what the answers to my questions are before making more platform-specific suggestions, but Mondrian is a Java desktop program (so cross-platform) and supports many visualization types that you can easily get into PowerPoint.
18,431 | How to choose initial values for nonlinear least squares fit | If there were a strategy that was both good and general -- one that always worked -- it would already be implemented in every nonlinear least squares program, and starting values would be a non-issue.
For many specific problems or families of problems, some pretty good approaches to starting values exist; some packages come with good start value calculations for specific nonlinear models or with more general approaches that often work but may have to be helped out with more specific functions or direct input of start values.
Exploring the space is necessary in some situations but I think your situation is likely to be such that more specific strategies will likely be worthwhile - but to design a good one pretty much requires a lot of domain knowledge we're unlikely to possess.
For your particular problem, likely good strategies can be designed, but it's not a trivial process; the more information you have about the likely size and extent of the peak (the typical parameter values and typical $x$'s would give some idea), the more can be done to design a good starting value choice.
What are the typical ranges of $y$'s and $x$'s you get? What do the average results look like? What are the unusual cases? What parameter values do you know to be possible or impossible?
One example - does the Gaussian part necessarily generate a turning point (a peak or trough)? Or is it sometimes so small relative to the line that it doesn't? Is $A$ always positive? etc.
Some sample data would help - typical cases and hard ones, if you're able.
Edit: Here's an example of how you can do fairly well if the problem isn't too noisy:
Here's some data that is generated from your model (population values are A = 1.9947, B = 10, C = 2.828, D = 0.09, E = 5):
The start values I was able to estimate are
(As = 1.658, Bs = 10.001, Cs = 3.053, Ds = 0.0881, Es = 5.026)
The fit of that start model looks like this:
The steps were:
1. Fit a Theil regression to get a rough estimate of D and E
2. Subtract the fit of the Theil regression off
3. Use LOESS to fit a smooth curve
4. Find the peak to get a rough estimate of A, and the x-value corresponding to the peak to get a rough estimate of B
5. Take the LOESS fits whose y-values are > 60% of the estimate of A as observations and fit a quadratic
6. Use the quadratic to update the estimate of B and to estimate C
7. From the original data, subtract off the estimate of the Gaussian
8. Fit another Theil regression to that adjusted data to update the estimate of D and E
In this case, the values are very suitable for starting a nonlinear fit.
I wrote this as R code but the same thing could be done in MATLAB.
I think better things than this are possible.
If the data are very noisy, this won't work at all well.
Edit2: This is the code I used in R, if anyone is interested:
gausslin.start <- function(x,y) {
theilreg <- function(x,y){
yy <- outer(y, y, "-")
xx <- outer(x, x, "-")
z <- yy / xx
slope <- median(z[lower.tri(z)])
intercept <- median(y - slope * x)
cbind(intercept=intercept,slope=slope)
}
  tr <- theilreg(x,y)   # use the function argument, not the global y1
  # abline(tr,col=4)    # optional: draw the Theil line if a plot is already open
Ds = tr[2]
Es = tr[1]
  yf <- y-Ds*x-Es
yfl <- loess(yf~x,span=.5)
# assumes there are enough points that the maximum there is 'close enough' to
# the true maximum
yflf <- yfl$fitted
locmax <- yflf==max(yflf)
Bs <- x[locmax]
As <- yflf[locmax]
qs <- yflf>.6*As
ys <- yfl$fitted[qs]
xs <- x[qs]-Bs
lf <- lm(ys~xs+I(xs^2))
bets <- lf$coefficients
Bso <- Bs
Bs <- Bso-bets[2]/bets[3]/2
Cs <- sqrt(-1/bets[3])
ystart <- As*exp(-((x-Bs)/Cs)^2)+Ds*x+Es
  y1a <- y-As*exp(-((x-Bs)/Cs)^2)
tr <- theilreg(x,y1a)
Ds <- tr[2]
Es <- tr[1]
res <- data.frame(As=As, Bs=Bs, Cs=Cs, Ds=Ds, Es=Es)
res
}
# population parameters: A = 1.9947 , B = 10, C = 2.828, D = 0.09, E = 5
# generate some data
set.seed(seed=3424921)
x <- runif(50,1,30)
y <- dnorm(x,10,2)*10+rnorm(50,0,.2)
y1 <- y+5+x*.09 # This is the data
xo <- order(x)
starts <- gausslin.start(x,y1)
ystart <- with(starts, As*exp(-((x-Bs)/Cs)^2)+Ds*x+Es)
plot(x,y1)
lines(x[xo],ystart[xo],col=2)
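For anyone working outside R, the theilreg helper above (median of all pairwise slopes, then a median intercept) ports directly; here is a NumPy sketch, assuming distinct $x$ values:

```python
import numpy as np

def theil_regression(x, y):
    """Median-of-pairwise-slopes line fit (robust to outliers)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    i, j = np.triu_indices(len(x), k=1)        # all pairs with i < j
    slopes = (y[j] - y[i]) / (x[j] - x[i])     # assumes no tied x values
    slope = np.median(slopes)
    intercept = np.median(y - slope * x)
    return intercept, slope

# quick check on a noiseless line: should recover intercept 5, slope 0.09
x = np.linspace(1, 30, 20)
b, m = theil_regression(x, 5 + 0.09 * x)
print(round(b, 6), round(m, 6))   # 5.0 0.09
```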
18,432 | How to choose initial values for nonlinear least squares fit | There is a general approach to fitting these kinds of nonlinear models. It involves reparameterizing the linear parameters with values of the dependent variable at, say, the first and last frequency values and a good point in the middle, say the 6th point. Then you can hold these parameters fixed and solve for the nonlinear parameters in the first phase of the minimization, and then minimize over all 5 parameters.
Schnute and I figured this out around 1982 when fitting growth models for fish.
http://www.nrcresearchpress.com/doi/abs/10.1139/f80-172
However, it is not necessary to read this paper. Because the parameters are linear, it is simply a matter of setting up and solving a 3x3 linear system of equations to use the stable parameterization of the model.
For your model the linear part is determined by a matrix $M$
$$
M= \begin{pmatrix}
\exp(-((x(1)-B)/C)^2)& x(1)& 1 \\
\exp(-((x(6)-B)/C)^2)& x(6)& 1 \\
\exp(-((x(n)-B)/C)^2)& x(n)& 1 \\
\end{pmatrix}
$$
where $n=20$ in this case.
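As a sketch of the idea in NumPy (the anchor points and parameter values below are made up; in the actual fit this system is re-solved as $B$ and $C$ change):

```python
import numpy as np

def linear_params(B, C, x, L):
    """Given the nonlinear parameters B, C and the curve values L at the
    three anchor points x, solve the 3x3 system M @ P = L for P = (A, D, E)."""
    M = np.column_stack([np.exp(-((x - B) / C) ** 2), x, np.ones_like(x)])
    return np.linalg.solve(M, L)

# hypothetical anchors and true parameters A=2, D=0.09, E=5, B=10, C=2.8
x = np.array([1.0, 10.0, 20.0])
A, D, E, B, C = 2.0, 0.09, 5.0, 10.0, 2.8
L = A * np.exp(-((x - B) / C) ** 2) + D * x + E

P = linear_params(B, C, x, L)
print(P)   # recovers [A, D, E] = [2, 0.09, 5]
```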
The code is written in AD Model Builder, which is now open source. It does automatic differentiation and supports a lot of things which make nonlinear optimization easier, such as phases where different parameters are kept fixed.
DATA_SECTION
init_int n
int mid
!! mid=6;
init_matrix data(1,n,1,3)
vector x(1,n)
vector y(1,n)
!! x=column(data,1);
!! y=column(data,3); //use column 3
PARAMETER_SECTION
init_number L1(3) //(3) means estimate in phase 3
init_number Lmid(3)
init_number Ln(3)
vector L(1,3)
init_number log_B // estimate in phase 1
init_number log_C(2) // estimate in phase 2
matrix M(1,3,1,3);
objective_function_value f
sdreport_vector P(1,3)
sdreport_number B
sdreport_number C
vector pred(1,n);
PROCEDURE_SECTION
L(1)=L1;
L(2)=Lmid;
L(3)=Ln;
B=exp(log_B);
C=exp(log_C);
M(1,1)=exp(-square((x(1)-B)/C));
M(1,2)=x(1);
M(1,3)=1;
M(2,1)=exp(-square((x(mid)-B)/C));
M(2,2)=x(mid);
M(2,3)=1;
M(3,1)=exp(-square((x(n)-B)/C));
M(3,2)=x(n);
M(3,3)=1;
P=solve(M,L); // solve for standard parameters
// P is vector corresponding to A,D,E
pred=P(1)*exp(-square((x-B)/C))+P(2)*x+P(3);
if (current_phase()<4)
f+=norm2(y-pred);
else
    f+=0.5*n*log(norm2(y-pred)); //concentrated likelihood
The model is fit in three phases. In phase 1 only $B$ is estimated.
Here the most important thing is that $C$ is large enough so that
the model can "see" where to move $B$ to. In phase 2 both $B$ and $C$
are estimated. Finally in phase 3 all parameters are estimated.
In the plot the green line is the fit after phase 1 and the blue line is the final fit.
For your case with the bad data, it fits quite easily and the (usual) parameter estimates are:
estimate std dev
A 2.0053e-01 5.8723e-02
D 1.6537e-02 4.7684e-03
E -1.8197e-01 7.3355e-02
B 3.0609e+00 5.0197e-01
C 5.6154e+00 9.4564e-01
18,433 | How to choose initial values for nonlinear least squares fit | If you have to do this many times then I would suggest that you use an Evolutionary Algorithm on the SSE function as a front-end to provide the starting values.
On the other hand you could use GEOGEBRA to create the function using sliders for the parameters and play with them to get starting values.
OR starting values from the data can be estimated by observation.
D and E come from the slope and intercept of the data (ignoring the Gaussian)
A is the vertical distance of the maximum of the Gaussian from the Dx+E line estimate.
B is the x value of the maximum of the Gaussian
C is half the apparent width of the Gaussian
18,434 | How to choose initial values for nonlinear least squares fit | The solution interval method discussed in the following paper could be used to find reasonable initial guesses for your problem. Specifically, since your model has 5 parameters, you can pick 5 data points from your data pool and plug them into your model to solve for the parameter values. Although your model is nonlinear, the equations you need to solve may be linear, or nonlinear with analytical solutions. The solutions for these 5 parameters can then be used as reasonable initial guesses for nonlinear least squares fitting. More details of this approach (e.g., how to choose the 5 data points from the data pool) can be found in the following paper:
https://www.researchgate.net/publication/344778817_Taking_the_Guess_Work_Out_of_the_Initial_Guess_A_Solution_Interval_Method_for_Least_Squares_Parameter_Estimation_in_Nonlinear_Models
If you want to find the guaranteed optimal parameter estimators that correspond to the global minimum of the squared error of the fit, you can use the focused regions identification method introduced in the following paper:
https://www.researchgate.net/publication/360223546_Reducing_the_Search_Space_for_Global_Minimum_A_Focused_Regions_Identification_Method_for_Least_Squares_Parameter_Estimation_in_Nonlinear_Models
18,435 | How to choose initial values for nonlinear least squares fit | For starting values you could do an ordinary least squares fit. Its slope and intercept would be the starting values for D and E. The largest residual would be the starting value for A. The position of the largest residual would be the starting value for B. Maybe someone else can suggest a starting value for sigma.
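The heuristic above can be sketched in a few lines (the synthetic data below mimics the question's population parameters; the answer leaves the sigma/C start open, so the crude width guess at the end is my own addition, not part of the suggestion):

```python
import numpy as np

# Starting values via an ordinary least squares line fit, as suggested.
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(1, 30, 50))
y = 2 * np.exp(-((x - 10) / 2.828) ** 2) + 0.09 * x + 5 + rng.normal(0, 0.2, 50)

D0, E0 = np.polyfit(x, y, 1)     # slope and intercept of the straight-line fit
resid = y - (D0 * x + E0)
A0 = resid.max()                 # largest residual -> height of the Gaussian
B0 = x[resid.argmax()]           # its position -> center of the Gaussian
C0 = (x.max() - x.min()) / 10    # rough width guess (not from the answer)
```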
However, non-linear least squares without deriving any sort of mechanistic equation from subject matter knowledge is risky business, and doing a lot of separate fits makes things even more questionable. Is there any subject matter knowledge behind your proposed equation? Are there other independent variables that relate to the differences between the 100 or so separate fits? It might help if you can incorporate those differences into a single equation that will fit all of the data at once.
18,436 | What is a practically good data analysis process? | My favorite "plan" or "list" is Scott Emerson's document Organizing Your Approach to a Data Analysis.
Note: the last two pages are under the heading "General Requirements for Ph.D. Applied Exam" but the advice given there generalizes to working on any analysis problem.
18,437 | What is a practically good data analysis process? | I found The Workflow of Data Analysis Using Stata to be a good book, particularly (but not only) as a Stata user. I found much with which to disagree, but even that helped clarify why I do things certain ways.
18,438 | What is a practically good data analysis process? | CRISP-DM, coined by the SPSS company (now part of IBM), is an acronym for the data mining process, which is the same as for "data analysis". SAS has a similar process called SEMMA.
18,439 | How do I compare bootstrapped regression slopes? | Bootstrapping is done to get a more robust picture of the sampling distribution than that which is assumed by large sample theory. When you bootstrap, there is effectively no limit to the number of `bootsamples' you take; in fact you get a better approximation to the sampling distribution the more bootsamples you take. It is common to use $B=10,000$ bootsamples, although there is nothing magical about that number. Furthermore, you don't run a test on the bootsamples; you have an estimate of the sampling distribution--use it directly. Here's an algorithm:
take a bootsample of one data set by sampling $n_1$ boot-observations with replacement. [Regarding the comments below, one relevant question is what constitutes a valid 'boot-observation' to use for your bootsample. In fact, there are several legitimate approaches; I will mention two that are robust and allow you to mirror the structure of your data: When you have observational data (i.e., the data were sampled on all dimensions), a boot-observation can be an ordered n-tuple (e.g., a row from your data set). For example, if you have one predictor variable and one response variable, you would sample $n_1$ $(x,y)$ ordered pairs. On the other hand, when working with experimental data, predictor variable values were not sampled, but experimental units were assigned to intended levels of each predictor variable. In a case like this, you can sample $n_{1j}$ $y$ values from within each of the $j$ levels of your predictor variable, then pair those $y$s with the corresponding value of that predictor level. In this manner, you would not sample over $X$.]
fit your regression model and store the slope estimate (call it $\hat\beta_1$)
take a bootsample of the other data set by sampling $n_2$ boot-observations with replacement
fit the other regression model and store the slope estimate (call it $\hat\beta_2$)
form a statistic from the two estimates (suggestion: use the slope difference $\hat\beta_1-\hat\beta_2$)
store the statistic and dump the other info so as not to waste memory
repeat steps 1 - 6, $B=10,000$ times
sort the bootstrapped sampling distribution of slope differences
compute the % of the bsd that overlaps 0 (whichever is smaller, the right tail % or the left tail %)
multiply this percentage by 2
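The ten steps above can be sketched directly (the data and seed below are synthetic illustrations, and the answer's $B=10{,}000$ is reduced for speed):

```python
import numpy as np

# Two-sample bootstrap of the slope difference, following steps 1-10.
rng = np.random.default_rng(1)
x1 = rng.uniform(0, 10, 40); y1 = 1.0 + 0.5 * x1 + rng.normal(0, 1, 40)
x2 = rng.uniform(0, 10, 50); y2 = 2.0 + 0.5 * x2 + rng.normal(0, 1, 50)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]           # OLS slope of y on x

B = 2000                                    # number of bootsamples
diffs = np.empty(B)
for b in range(B):
    i = rng.integers(0, len(x1), len(x1))   # resample (x, y) pairs, group 1
    j = rng.integers(0, len(x2), len(x2))   # resample (x, y) pairs, group 2
    diffs[b] = slope(x1[i], y1[i]) - slope(x2[j], y2[j])

tail = min(np.mean(diffs < 0), np.mean(diffs > 0))
p_two_sided = 2 * tail                      # steps 8-10: two-tailed p-value
```

Note that each group is resampled separately at its own sample size, which mirrors the structure of the original two data sets.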
The logic of this algorithm as a statistical test is fundamentally similar to classical tests (e.g., t-tests) but you are not assuming that the data or the resulting sampling distributions have any particular distribution. (For example, you are not assuming normality.) The primary assumption you are making is that your data are representative of the population you sampled from / want to generalize to. That is, the sample distribution is similar to the population distribution. Note that, if your data are not related to the population you're interested in, you are flat out of luck.
Some people worry about using, e.g., a regression model to determine the slope if you're not willing to assume normality. However, this concern is mistaken. The Gauss-Markov theorem tells us that the estimate is unbiased (i.e., centered on the true value), so it's fine. The lack of normality simply means that the true sampling distribution may be different from the theoretically posited one, and so the p-values are invalid. The bootstrapping procedure gives you a way to deal with this issue.
Two other issues regarding bootstrapping: First, if the classical assumptions are met, bootstrapping is less efficient (i.e., has less power) than a parametric test. Second, bootstrapping works best when you are exploring near the center of a distribution: means and medians are good, quartiles not so good, and bootstrapping the min or max necessarily fails. Regarding the first point, you may not need to bootstrap in your situation; regarding the second point, bootstrapping the slope is perfectly fine.
18,440 | How do I compare bootstrapped regression slopes? | You can combine the two data sets into one regression. Let $s_i$ be an indicator for being in the first data set. Then run the regression
$$\begin{equation*}y_i = \beta_0 + \beta_1 x_i + \beta_2 s_i + \beta_3 s_i x_i + \epsilon_i \end{equation*}$$
Note that the interpretation of $\beta_3$ is the difference in slopes from the separate regressions:
$$\begin{align*} \text{E}[y_i \mid x, s_i = 1] &= (\beta_0 + \beta_2) + (\beta_1 + \beta_3) x_i \\
\text{E}[y_i \mid x, s_i = 0] &= \beta_0 + \beta_1 x_i. \end{align*}$$
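This identity is easy to verify numerically. The following is my own illustration (synthetic data, numpy least squares): the interaction coefficient from the combined fit equals the difference between the slopes of the two separately fitted regressions.

```python
import numpy as np

# Check that beta_3 in y = b0 + b1*x + b2*s + b3*s*x equals the
# difference of the within-group OLS slopes.
rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
s = (rng.random(n) < 0.5).astype(float)      # indicator for data set 1
y = 1.0 + 2.0 * x + 0.5 * s + 1.5 * s * x + rng.normal(size=n)

X = np.column_stack([np.ones(n), x, s, s * x])
b0, b1, b2, b3 = np.linalg.lstsq(X, y, rcond=None)[0]

slope1 = np.polyfit(x[s == 1], y[s == 1], 1)[0]
slope0 = np.polyfit(x[s == 0], y[s == 0], 1)[0]
assert np.isclose(b3, slope1 - slope0)       # identical point estimates
```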
You can bootstrap the distribution of $\beta_3$ if you want or just use standard testing procedures (normal/t). If using analytical solutions, you need to either assume homoskedasticity across groups or correct for heteroskedasticity. For bootstrapping to be robust to this, you need to choose $n$ observations randomly among the first group and $n$ among the second, rather than $2n$ from the whole population.
If you have correlation among the error terms, you may need to alter this procedure a bit, so write back if that is the case.
You can generalize this approach to the seemingly unrelated regressions (SUR) framework. This approach still allows the coefficients for the intercept and the slope to be arbitrarily different in the two data sets.
18,441 | How do I compare bootstrapped regression slopes? | Doing everything in one regression is neat, and the assumption of independence is important. But calculating the point estimates in this way does not require constant variance. Try this R code:
x <- rbinom(100, 1, 0.5)  # group indicator
z <- rnorm(100)           # predictor
y <- rnorm(100)           # response
coef(lm(y~x*z))           # combined fit with interaction
coef(lm(y~z, subset= x==1))[1] - coef(lm(y~z, subset= x==0))[1]  # intercept difference
coef(lm(y~z, subset= x==1))[2] - coef(lm(y~z, subset= x==0))[2]  # slope difference
We get the same point estimate either way. Estimates of standard error may require constant variance (depending on which one you use) but the bootstrapping considered here doesn't use estimated standard errors.
18,442 | How can I estimate the probability of a random member from one population being "better" than a random member from a different population? | Solution
Let the two means be $\mu_x$ and $\mu_y$ and their standard deviations be $\sigma_x$ and $\sigma_y$, respectively. The difference in timings between two rides ($Y-X$) therefore has mean $\mu_y - \mu_x$ and standard deviation $\sqrt{\sigma_x^2 + \sigma_y^2}$. The standardized difference ("z score") is
$$z = \frac{\mu_y - \mu_x}{\sqrt{\sigma_x^2 + \sigma_y^2}}.$$
Unless your ride times have strange distributions, the chance that ride $Y$ takes longer than ride $X$ is approximately the Normal cumulative distribution, $\Phi$, evaluated at $z$.
Computation
You can work this probability out on one of your rides because you already have estimates of $\mu_x$ etc. :-). For this purpose it's easy to memorize a few key values of $\Phi$: $\Phi(0) = .5 = 1/2$, $\Phi(-1) \approx 0.16 \approx 1/6$, $\Phi(-2) \approx 0.022 \approx 1/40$, and $\Phi(-3) \approx 0.0013 \approx 1/750$. (The approximation may be poor for $|z|$ much larger than $2$, but knowing $\Phi(-3)$ helps with the interpolation.) In conjunction with $\Phi(z) = 1 - \Phi(-z)$ and a bit of interpolation, you can quickly estimate the probability to one significant figure, which is more than precise enough given the nature of the problem and the data.
Example
Suppose route $X$ takes 30 minutes with a standard deviation of 6 minutes and route $Y$ takes 36 minutes with a standard deviation of 8 minutes. With enough data covering a wide range of conditions, the histograms of your data might eventually approximate these:
(These are probability density functions for Gamma(25, 30/25) and Gamma(20, 36/20) variables. Observe that they are decidedly skewed to the right, as one would expect for ride times.)
Then
$$\mu_x = 30, \quad \mu_y = 36, \quad \sigma_x = 6, \quad \sigma_y = 8.$$
Whence
$$z = \frac{36 - 30}{\sqrt{6^2 + 8^2}} = 0.6.$$
We have
$$\Phi(0) = 0.5; \quad \Phi(1) = 1 - \Phi(-1) \approx 1 - 0.16 = 0.84.$$
We therefore estimate the answer is 0.6 of the way between 0.5 and 0.84: 0.5 + 0.6*(0.84 - 0.5) = approximately 0.70. (The correct but overly precise value for the Normal distribution is 0.73.)
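The exact normal value can be checked with the error function (here via Python's `math.erf`, since $\Phi(z) = \tfrac12\left(1 + \operatorname{erf}(z/\sqrt{2})\right)$):

```python
from math import erf, sqrt

# Exact normal CDF for the worked example, instead of mental interpolation.
def phi(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))   # standard normal CDF

z = (36 - 30) / sqrt(6**2 + 8**2)             # = 0.6
p = phi(z)
print(round(p, 2))                            # 0.73, the "overly precise" value
```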
There's about a 70% chance that route $Y$ will take longer than route $X$. Doing this calculation in your head will take your mind off the next hill. :-)
(The correct probability for the histograms shown is 72%, even though neither is Normal: this illustrates the scope and utility of the Normal approximation for the difference in trip times.)
18,443 | How can I estimate the probability of a random member from one population being "better" than a random member from a different population? | My instinctive approach may not be the most statistically sophisticated, but you may find it to be more fun :)
I would get a decent-sized sheet of graph paper, and divide up the columns into time blocks. Depending on how long your rides are - are we talking about a mean time of 5 minutes or an hour - you might use different sized blocks. Let's say each column is a block of two minutes. Pick a color for route A and a different color for route B, and after each ride, make a dot in the appropriate column. If there's already a dot of that color, move up one row. In other words, this would be a histogram in absolute numbers.
Then, you would be building a fun histogram with each ride you take, and can visually see the difference between the two routes.
My sense based on my own experience as a bike commuter (not verified through quantification) is that the times will not be normally distributed - they would have a positive skew, or in other words a long tail of upper-end times. My typical time is not that much longer than my shortest possible time, but every now and then I seem to hit all the red lights, and there's a much higher upper-end. Your experience may be different. That's why I think the histogram approach might be better, so you can observe the shape of the distribution yourself.
PS: I don't have enough rep to comment in this forum, but I love whuber's answer! He addresses my concern about skewness pretty effectively with a sample analysis. And I like the idea of calculating in your head to keep your mind off the next hill :)
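If the graph paper runs out, the same tally is easy to keep in code. A hypothetical Python sketch of the two-minute-column idea (the ride times here are simulated from right-skewed Gamma distributions, since the whole point of the answer is that you collect your own):

```python
import random
from collections import Counter

random.seed(1)

# simulated ride times in minutes; gammavariate gives the right-skewed
# shape described above (a long tail of slow rides)
route_a = [random.gammavariate(25, 30 / 25) for _ in range(60)]
route_b = [random.gammavariate(20, 36 / 20) for _ in range(60)]

def text_histogram(times, label, width=2):
    """Tally times into `width`-minute columns, one '*' per ride."""
    bins = Counter(int(t // width) * width for t in times)
    for lo in sorted(bins):
        print(f"{label} {lo:2d}-{lo + width:2d} min | {'*' * bins[lo]}")

text_histogram(route_a, "A")
text_histogram(route_b, "B")
```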
18,444 | How can I estimate the probability of a random member from one population being "better" than a random member from a different population? | Suppose the two data sets are $X$ and $Y$. Randomly sample one person from each population, giving you $x,y$. Record a '1' if $x > y$ and 0 otherwise. Repeat this many times (say, 10000) and the mean of these indicators will give you an estimate of $P(X_{i} > Y_{j})$ where $i,j$ are randomly selected subjects from the two populations, respectively. In R, the code would go something like:
#X, Y are the two data sets
ii = rep(0,10000)
for(k in 1:10000)
{
  x1 = sample(X,1)
  y1 = sample(Y,1)
  ii[k] = (x1>y1)
}
# this is an estimate of P(X>Y)
mean(ii)
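The same resampling idea can be sketched in Python (the two data sets below are made-up stand-ins for illustration). Note that with modest sample sizes you can also just average the comparison over all pairs, which gives the exact sample value of $P(X > Y)$ without resampling noise:

```python
import random

random.seed(0)

# toy stand-ins for the two data sets
X = [random.gauss(30, 6) for _ in range(500)]
Y = [random.gauss(36, 8) for _ in range(500)]

# resampling estimate, mirroring the R loop above
hits = sum(random.choice(X) > random.choice(Y) for _ in range(10000))
est = hits / 10000   # estimate of P(X > Y)

# exact average over all pairs of the observed data
exact = sum(x > y for x in X for y in Y) / (len(X) * len(Y))
print(est, exact)    # both roughly 0.27 for these inputs
```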
18,445 | How to generate random auto correlated binary time series data? | Use a two-state Markov chain.
If the states are called 0 and 1, then the chain can be represented by a 2x2 matrix $P$ giving the transition probabilities between states, where $P_{ij}$ is the probability of moving from state $i$ to state $j$. In this matrix, each row should sum to 1.0.
From statement 2, we have $P_{11} = 0.3$, and simple conservation then says $P_{10} = 0.7$.
From statement 1, you want the long-term probability (also called equilibrium or steady-state) to be $P_1 = 0.05$. This says $$P_1 = 0.05 = 0.3 P_1 + P_{01}(1-P_1)$$ Solving gives $$P_{01} = 0.0368421$$ and a transition matrix $$P = \left(
\begin{array}{cc}
0.963158 & 0.0368421 \\
0.7 & 0.3
\end{array}
\right)$$
(You can check your transition matrix for correctness by raising it to a high power--in this case 14 does the job--each row of the result gives the identical steady state probabilities)
Now in your random number program, start by randomly choosing state 0 or 1; this selects which row of $P$ you're using. Then use a uniform random number to determine the next state. Spit out that number, rinse, repeat as necessary.
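The recipe above can be sketched in a few lines of Python (using the same 0.05 and 0.3 numbers; `random.random()` plays the role of the uniform draw):

```python
import random

random.seed(42)

p11 = 0.3    # P(1 -> 1), from statement 2
pi1 = 0.05   # desired long-run fraction of 1s, from statement 1

# steady state: pi1 = pi1*p11 + (1 - pi1)*p01, solved for p01
p01 = pi1 * (1 - p11) / (1 - pi1)

P = [[1 - p01, p01],    # row 0: transitions out of state 0
     [1 - p11, p11]]    # row 1: transitions out of state 1

# simulate: the current state picks the row, a uniform draw picks the column
state, ones, n = 0, 0, 200_000
for _ in range(n):
    state = 1 if random.random() < P[state][1] else 0
    ones += state

print(round(p01, 7))   # about 0.0368421
print(ones / n)        # close to 0.05
```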
18,446 | How to generate random auto correlated binary time series data? | I took a crack at coding @Mike Anderson's answer in R. I couldn't figure out how to do it using sapply, so I used a loop. I changed the probs slightly to get a more interesting result, and I used 'A' and 'B' to represent the states. Let me know what you think.
set.seed(1234)
TransitionMatrix <- data.frame(A=c(0.9,0.7),B=c(0.1,0.3),row.names=c('A','B'))
Series <- c('A',rep(NA,99))
i <- 2
while (i <= length(Series)) {
  Series[i] <- ifelse(TransitionMatrix[Series[i-1],'A']>=runif(1),'A','B')
  i <- i+1
}
Series <- ifelse(Series=='A',1,0)
> Series
[1] 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 1
[38] 1 1 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
[75] 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1 1 1 1 0 1 1 1 1 1 1 1
/edit: In response to Paul's comment, here's a more elegant formulation
set.seed(1234)
createSeries <- function(n, TransitionMatrix){
  stopifnot(is.matrix(TransitionMatrix))
  stopifnot(n>0)
  Series <- c(1,rep(NA,n-1))
  random <- runif(n-1)
  for (i in 2:length(Series)){
    Series[i] <- TransitionMatrix[Series[i-1]+1,1] >= random[i-1]
  }
  return(Series)
}
createSeries(100, matrix(c(0.9,0.7,0.1,0.3), ncol=2))
I wrote the original code when I was just learning R, so cut me a little slack. ;-)
Here's how you would estimate the transition matrix, given the series:
Series <- createSeries(100000, matrix(c(0.9,0.7,0.1,0.3), ncol=2))
estimateTransMatrix <- function(Series){
  require(quantmod)
  out <- table(Lag(Series), Series)
  return(out/rowSums(out))
}
estimateTransMatrix(Series)
Series
0 1
0 0.1005085 0.8994915
1 0.2994029 0.7005971
The order is swapped vs my original transition matrix, but it gets the right probabilities.
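For comparison, the same tabulation can be done by counting consecutive pairs directly, with no lagging helper; here is a hypothetical Python version of the estimator:

```python
from collections import Counter

def estimate_transition_matrix(series):
    """Estimate a two-state transition matrix from a 0/1 series by
    counting consecutive pairs; assumes both states actually occur."""
    pairs = Counter(zip(series, series[1:]))
    rows = []
    for s in (0, 1):
        total = pairs[(s, 0)] + pairs[(s, 1)]
        rows.append([pairs[(s, 0)] / total, pairs[(s, 1)] / total])
    return rows

print(estimate_transition_matrix([0, 0, 1, 0, 0, 0, 1, 1, 0]))
```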
18,447 | How to generate random auto correlated binary time series data? | Here is an answer based on the markovchain package that can be generalized to more complex dependence structures.
library(markovchain)
library(dplyr)
library(ggplot2)
# define the states
states_excitation = c("steady", "excited")
# transition probability matrix
tpm_excitation = matrix(
  data = c(0.2, 0.8, 0.2, 0.8),
  byrow = TRUE,
  nrow = 2,
  dimnames = list(states_excitation, states_excitation)
)
# markovchain object
mc_excitation = new(
  "markovchain",
  states = states_excitation,
  transitionMatrix = tpm_excitation,
  name = "Excitation Transition Model"
)
# simulate
df_excitation = data_frame(
  datetime = seq.POSIXt(as.POSIXct("01-01-2016 00:00:00",
                                   format = "%d-%m-%Y %H:%M:%S",
                                   tz = "UTC"),
                        as.POSIXct("01-01-2016 23:59:00",
                                   format = "%d-%m-%Y %H:%M:%S",
                                   tz = "UTC"), by = "min"),
  excitation = rmarkovchain(n = 1440, mc_excitation))
# plot
df_excitation %>%
  ggplot(aes(x = datetime, y = as.numeric(factor(excitation)))) +
  geom_step(stat = "identity") +
  theme_bw() +
  scale_y_discrete(name = "State", breaks = c(1, 2),
                   labels = states_excitation)
This gives you:
18,448 | How to generate random auto correlated binary time series data? | I've lost track of the paper where this approach was described, but here goes.
Decompose the transition matrix into
$$
\begin{aligned}
T &= (1-p_t) \left[ \begin{matrix} 1 & 0 \\ 0 & 1 \end{matrix} \right] + p_t \left[ \begin{matrix} p_0 & p_0 \\ (1-p_0) & (1-p_0) \end{matrix} \right] \\
&= (1-p_t) I + p_t E
\end{aligned}
$$
which, intuitively, corresponds to the idea that there is some probability $1-p_t$ that the system stays in the same state, and a probability $p_t$ that the state gets randomized, where randomized means making an independent draw from the equilibrium distribution for the next state ($p_0$ is the equilibrium probability for being in the first state).
Note that from the data you've specified you need to solve for $p_t$ from the specified $T_{11}$ via $T_{11} = (1-p_t)+p_t(1-p_0)$.
One of the useful features of this decomposition is that it pretty straightforwardly generalizes to a class of correlated Markov models in higher-dimensional problems.
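Plugging in the numbers from the two-state example earlier on this question (equilibrium split 0.95/0.05 and $T_{11} = 0.3$) shows the decomposition reproduces the same matrix; a quick Python check:

```python
p0 = 0.95    # equilibrium probability of the first (0) state
T11 = 0.3    # specified probability of staying in state 1

# T11 = (1 - pt) + pt*(1 - p0), solved for pt
pt = (1 - T11) / p0

T01 = pt * (1 - p0)   # P(0 -> 1): randomize, then land in state 1
T10 = pt * p0         # P(1 -> 0): randomize, then land in state 0

print(round(pt, 4))    # about 0.7368
print(round(T01, 4))   # about 0.0368, matching the earlier answer's P_01
print(round(T10, 4))   # about 0.7, likewise
```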
18,449 | How to NOT use statistics | To draw conclusions about a group based on the population, the group must be representative of the population and independent. Others have discussed this, so I will not dwell on this piece.
One other thing to consider is the non-intuitiveness of probabilities. Let's assume that we have a group of 10 people who are independent and representative of the population (a random sample), and that we know that in the population 10% have a particular characteristic. Therefore each of the 10 people has a 10% chance of having the characteristic. The common assumption is that it is fairly certain that at least 1 will have the characteristic. But this is a simple binomial problem: we can calculate the probability that none of the 10 have the characteristic, which is about 35% (converging to 1/e for a bigger group and smaller probability) and much higher than most people would guess. There is also a 26% chance that 2 or more people have the characteristic.
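The two binomial numbers quoted above are quick to verify in Python:

```python
from math import comb

n, p = 10, 0.10                       # group size, prevalence
p_none = (1 - p) ** n                 # nobody has the characteristic
p_one = comb(n, 1) * p * (1 - p) ** (n - 1)
p_two_or_more = 1 - p_none - p_one

print(round(p_none, 3))          # 0.349, i.e. "about 35%"
print(round(p_two_or_more, 3))   # 0.264, i.e. "about 26%"
```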
18,450 | How to NOT use statistics | Unless the people in the room are a random sample of the world's population, any conclusions based on statistics about the world's population are going to be very suspect. One out of every 5 people in the world is Chinese, but none of my five children are...
18,451 | How to NOT use statistics | To address overapplication of statistics to small samples, I recommend countering with well-known jokes ("I am so excited, my mother is pregnant again and my baby sibling will be Chinese." "Why?" "I have read that every fourth baby is Chinese.").
Actually, I recommend jokes to address all kinds of misconceptions in statistics; see http://xkcd.com/552/ for correlation and causation.
The problem with newspaper articles is rarely the fact that they treat a rare phenomenon.
Simpson's paradox comes to mind as an example that statistics can rarely be used without analysis of the causes.
18,452 | How to NOT use statistics | There is an interesting article by Mary Gray on misuse of statistics in court cases and things like that...
Gray, Mary W.; Statistics and the Law. Math. Mag. 56 (1983), no. 2, 67–81
18,453 | How to NOT use statistics | When it comes to logic and common sense, be careful: those two are rare. With certain "discussions" you might recognize something... the point of the argument is the argument.
http://www.wired.com/wiredscience/2011/05/the-sad-reason-we-reason/
18,454 | How to NOT use statistics | Statistical analysis or statistical data?
I think this example in your question relates to statistical data: "I read that 10% of the world population has this disease". In other words, in this example someone is using numbers to help communicate quantity more effectively than just saying 'many people'.
My guess is that the answer to your question is hidden in the motivation of the speaker on why she is using numbers. It could be to communicate some notion better, or it could be to show authority, or it could be to dazzle the listener. The good thing about stating numbers rather than saying 'very big' is that people can refute the number. See Popper's idea on refutation.
18,455 | How to NOT use statistics | Hypothesis: $A$
(Textbook) Result: Do not reject $A$ ($\sigma = c$)
Your Statement: $A$ holds with probability $\sigma$!
Correct would be: In this case, you know nothing. If you want to "prove" $A$, your hypothesis has to be $\neg A$; reject it with $\sigma$ to get the desired statement.
18,456 | How to NOT use statistics | From what I understand of statistics it's never applicable to a single member of a population
It's not true. It depends on the application.
Example: nuclear decay in physics. The rate of decay defines the probability of a decay of every single nucleus. You take any nucleus and it'll have exactly the same probability of decay, which you established by experimentation on the sample.
It's not true. It depends on the application.
Example: nuclear decay in physics. The rate of decay, defin | How to NOT use statistics
From what I understand of statistics it's never applicable to a single member of a population
It's not true. It depends on the application.
Example: nuclear decay in physics. The rate of decay, defines the probability of a decay of every single nucleus. You take any nucleus and it'll have exactly the same probability of decay, which you established by experimentation on the sample. | How to NOT use statistics
From what I understand of statistics it's never applicable to a single member of a population
It's not true. It depends on the application.
Example: nuclear decay in physics. The rate of decay, defin |
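A small simulation sketch of that point (the half-life value is illustrative):

```python
import math
import random

half_life = 5730.0                 # illustrative value (roughly carbon-14, in years)
lam = math.log(2) / half_life      # decay constant
t = 1000.0

# The rate law assigns the SAME decay probability to every single nucleus:
p_single = 1 - math.exp(-lam * t)

# ...which is what you estimate by observing a large sample of nuclei:
random.seed(0)
n = 100_000
decayed = sum(random.random() < p_single for _ in range(n))
print(p_single, decayed / n)       # the sample fraction approximates p_single
```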
18,457 | What is the origin of the "receiver operating characteristic" (ROC) terminology? | The earliest book reference that I know of is
Woodward, P. M. (1953). Probability and information theory with applications to radar. London: Pergamon Press.
but the concept, which was developed during World War II for the analysis of radar receivers, might have been published earlier than 1953 in journal articles (after the War was over) or in the multivolume series of texts published by the MIT Radiation Laboratory about their research during World War II.
18,458 | What is the origin of the "receiver operating characteristic" (ROC) terminology? | Earliest article I can find is from 1954:
Peterson, W., Birdsall, T., Fox, W. (1954). The theory of signal detectability. Transactions of the IRE Professional Group on Information Theory, 4(4), pp. 171-212.
Abstract:
An optimum observer required to give a yes or no answer simply chooses an operating level and concludes that the receiver input arose from signal plus noise only when this level is exceeded by the output of his likelihood ratio receiver. Associated with each such operating level are conditional probabilities that the answer is a false alarm and the conditional probability of detection. Graphs of these quantities called receiver operating characteristic, or ROC, curves are convenient for evaluating a receiver. If the detection problem is changed by varying, for example, the signal power, then a family of ROC curves is generated. Such things as betting curves can easily be obtained from such a family.
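The abstract's recipe can be sketched directly: sweep an operating level over detector outputs and record, for each level, the conditional false-alarm and detection rates (the scores below are made up for illustration):

```python
scores_noise  = [0.10, 0.40, 0.35, 0.80]   # detector output under noise only
scores_signal = [0.90, 0.50, 0.75, 0.86]   # detector output under signal + noise

def roc_point(level):
    """(false-alarm rate, detection rate) when answering 'yes' above `level`."""
    fa  = sum(s >= level for s in scores_noise)  / len(scores_noise)
    det = sum(s >= level for s in scores_signal) / len(scores_signal)
    return fa, det

# Sweeping the operating level traces the ROC curve from (1, 1) down to (0, 0):
for level in (0.0, 0.45, 0.85, 1.01):
    print(level, roc_point(level))
```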
18,459 | What is the origin of the "receiver operating characteristic" (ROC) terminology? | Carleton Douglas Creelman writes in History of Signal Detection Theory
Subsequent to the end of WWII, and the end of tight security around theoretical work in the field, academically connected laboratories, such as those at MIT and the University of Michigan, published work describing ways to analyze faint, noise-contaminated signals. From the Research Laboratory of Electronics at MIT, van Meter and Middleton (1954) published their analysis of the problem of extracting a signal that is imbedded in noise. Davenport and Root (1958) provided a comprehensive text on the engineering issues surrounding signal extraction. At the University of Michigan, Peterson et al. (1954) addressed the same issues, arriving at some of the same theoretical findings, and built on them by adding consideration of the decision processes required. The technique involved two core ideas: detection should involve correlation of a known signal with the noise-masked input and the input signal could be fully characterized by sampling at a rate determined by the bandwidth of the noise and the duration of the observation.
18,460 | K-nearest-neighbour with continuous and binary variables | It's ok combining categorical and continuous variables (features).
Somehow, there is not much theoretical ground for a method such as k-NN. The heuristic is that if two points are close to each other (according to some distance), then they have something in common in terms of output. Maybe yes, maybe no. And it depends on the distance you use.
In your example you define a distance between two points $(a,b,c)$ and $(a',b',c')$ as follows:
take the squared distance between $a$ and $a'$: $(a-a')^2$
Add +2 if $b$ and $b'$ are different, +0 if equal (because you count a difference of 1 for each category)
Add +2 if $c$ and $c'$ are different, +0 if equal (same)
This corresponds to giving weights implicitly to each feature.
Note that if $a$ takes large values (like 1000, 2000...) with big variance, then the weights of the binary features will be negligible compared to the weight of $a$. Only the distance between $a$ and $a'$ will really matter. And the other way around: if $a$ takes small values like 0.001, only the binary features will count.
You may normalize the behaviour by reweighting: dividing each feature by its standard deviation. This applies both to continuous and binary variables. You may also provide your own preferred weights.
Note that the R function kNN() does it for you: https://www.rdocumentation.org/packages/DMwR/versions/0.4.1/topics/kNN
As a first attempt, just use norm = TRUE (normalization). This will avoid most of the nonsense that may appear when combining continuous and categorical features.
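The reweighting idea can be sketched in a few lines (toy data, pure Python; dividing each feature by its standard deviation, as suggested above):

```python
import statistics

# Toy rows (a, b, c): a is continuous with large values, b and c are binary.
rows = [(1000.0, 0, 1), (2000.0, 1, 1), (1500.0, 0, 0), (1200.0, 1, 0)]

def raw(p, q):
    """Plain squared Euclidean distance: dominated entirely by feature a."""
    return sum((pi - qi) ** 2 for pi, qi in zip(p, q))

# Divide every feature by its (population) standard deviation before measuring:
stds = [statistics.pstdev(col) for col in zip(*rows)]

def scaled(p, q):
    """Squared distance after per-feature scaling: binary mismatches count too."""
    return sum(((pi - qi) / s) ** 2 for pi, qi, s in zip(p, q, stds))

print(raw(rows[0], rows[1]))     # huge: only feature a matters
print(scaled(rows[0], rows[1]))  # the mismatch in b now contributes as well
```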
18,461 | K-nearest-neighbour with continuous and binary variables | Yes, you certainly can use KNN with both binary and continuous data, but there are some important considerations you should be aware of when doing so.
The results are going to be heavily informed by the binary splits relative to the dispersion among the real-valued results (for 0-1 scaled, unweighted vectors), as illustrated below:
You can see in this example that an individual observation's nearest neighbors by distance would be MUCH more heavily informed by the binary variable than by the scaled real-value variable.
Furthermore, this extends to multiple binary variables: if we change one of the real-valued variables to binary, we can see that the distances will be much more informed by matching on all of the binary variables involved than by nearness of the real values:
You'll want to include only critical binary variables: you are, in effect, asking "of all of the observations that match this configuration of binary variables (if any), which have the nearest real-valued values?" This is a reasonable formulation of many problems that could be addressed with KNN, and a very poor formulation of other problems.
# code to reproduce the plots:
library(scatterplot3d)
set.seed(1)  # fix the RNG so the plots are reproducible
scalevector <- function(x){(x - min(x))/(max(x) - min(x))}  # rescale to [0, 1]
x <- scalevector(rnorm(100))
y <- scalevector(rnorm(100))
z <- ifelse(sign(rnorm(100))==-1, 0, 1)
df <- data.frame(cbind(x,y,z))
scatterplot3d(df$x, df$z, df$y, pch=16, highlight.3d=FALSE,
type="h", angle =235, xlab='', ylab='', zlab='')
x <- scalevector(rnorm(100))
y <- ifelse(sign(rnorm(100))==-1, 0, 1)
z <- ifelse(sign(rnorm(100))==-1, 0, 1)
df <- data.frame(cbind(x,y,z))
scatterplot3d(df$x, df$z, df$y, pch=16, highlight.3d=FALSE,
type="h", angle =235, xlab='', ylab='', zlab='')
18,462 | What do the terms "dense" and "sparse" mean in the context of neural networks? | In mathematics, "sparse" and "dense" often refer to the number of zero vs. non-zero elements in an array (e.g. vector or matrix). A sparse array is one that contains mostly zeros and few non-zero entries. A dense array contains mostly non-zeros.
There's no hard threshold for what counts as sparse; it's a loose term, but can be made more specific. For example, a vector is $k$-sparse if it contains at most $k$ non-zero entries. Another way of saying this is that the vector's $\ell_0$ norm is $k$.
The usage of these terms in the context of neural networks is similar to their usage in other fields. In the context of NNs, things that may be described as sparse or dense include the activations of units within a particular layer, the weights, and the data. One could also talk about "sparse connectivity", which refers to the situation where only a small subset of units are connected to each other. This is a similar concept to sparse weights, because a connection with zero weight is effectively unconnected.
"Sparse array" can also refer to a class of data types that are efficient for representing arrays that are sparse. This is a concept within the domain of programming languages. It's related to, but distinct from the mathematical concept. | What do the terms "dense" and "sparse" mean in the context of neural networks? | In mathematics, "sparse" and "dense" often refer to the number of zero vs. non-zero elements in an array (e.g. vector or matrix). A sparse array is one that contains mostly zeros and few non-zero entr | What do the terms "dense" and "sparse" mean in the context of neural networks?
In mathematics, "sparse" and "dense" often refer to the number of zero vs. non-zero elements in an array (e.g. vector or matrix). A sparse array is one that contains mostly zeros and few non-zero entries. A dense array contains mostly non-zeros.
There's no hard threshold for what counts as sparse; it's a loose term, but can be made more specific. For example, a vector is $k$-sparse if it contains at most $k$ non-zero entries. Another way of saying this is that the vector's $\ell_0$ norm is $k$.
The usage of these terms in the context of neural networks is similar to their usage in other fields. In the context of NNs, things that may be described as sparse or dense include the activations of units within a particular layer, the weights, and the data. One could also talk about "sparse connectivity", which refers to the situation where only a small subset of units are connected to each other. This is a similar concept to sparse weights, because a connection with zero weight is effectively unconnected.
"Sparse array" can also refer to a class of data types that are efficient for representing arrays that are sparse. This is a concept within the domain of programming languages. It's related to, but distinct from the mathematical concept. | What do the terms "dense" and "sparse" mean in the context of neural networks?
In mathematics, "sparse" and "dense" often refer to the number of zero vs. non-zero elements in an array (e.g. vector or matrix). A sparse array is one that contains mostly zeros and few non-zero entr |
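Both senses can be made concrete in a few lines (a sketch, not tied to any particular NN library):

```python
def l0_norm(v):
    """Number of non-zero entries; a vector is k-sparse iff l0_norm(v) <= k."""
    return sum(x != 0 for x in v)

sparse_v = [0, 0, 3.5, 0, 0, 0, -1.2, 0]       # mostly zeros: 2-sparse
dense_v  = [1.1, -0.4, 3.5, 2.0, 0.7, -1.2, 0.9, 4.4]
print(l0_norm(sparse_v), l0_norm(dense_v))

# The programming-language sense: a sparse *data type* stores only the
# non-zero entries, e.g. as an index -> value map:
sparse_repr = {i: x for i, x in enumerate(sparse_v) if x != 0}
print(sparse_repr)  # {2: 3.5, 6: -1.2}
```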
18,463 | What do the terms "dense" and "sparse" mean in the context of neural networks? | Think of the term matrix as an image of a person's face. Sparse would mean that there are a few very distinct pixels in the image that carry a lot of meaning regarding the identity of a person. This could be e.g. the tip of your nose, the center of your pupils or the corners of your mouth. The sparse model would only use those pixels for distinguishing person A from person B. In comparison the term dense would imply that all the image pixels are evaluated to identify a person - e.g. by comparing the gradient of all adjacent pixels.
While sparse is usually more efficient because less data needs to be evaluated, the dense model is more effective, but oftentimes too computationally heavy for the task at hand.
In @user20160's answer, the unused pixels would be the zeroes of my 2D image matrix.
18,464 | PCA before Random Forest Regression provide better predictive scores for my dataset than just Random Forest Regression, how to explain it? [duplicate] | Using Random Forest in a dataset as the one you described has two major problems:
Random Forest does not perform well when features are monotonic transformations of other features (this makes the trees of the forest less independent from each other).
The same happens when you have more features than samples: random forest will probably overfit the dataset, and you will have a poor out of bag performance.
When using PCA you get rid of these two problems that are lowering the performance of Random Forest:
you reduce the number of features.
you get rid of collinear features (all collinear features will end up in a single PCA component).
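The "collinear features collapse into few components" point can be sketched with NumPy on synthetic data (PCA done directly via SVD here; this is not the poster's dataset or any specific library's PCA class):

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_signals, n_features = 30, 4, 100

# 100 features that are all linear combinations of just 4 underlying signals,
# mimicking a wide, highly collinear matrix with more features than samples.
signals = rng.normal(size=(n_samples, n_signals))
X = signals @ rng.normal(size=(n_signals, n_features))

# PCA via SVD of the centered matrix:
Xc = X - X.mean(axis=0)
s = np.linalg.svd(Xc, compute_uv=False)
explained = s**2 / (s**2).sum()

# The collinear columns end up in a handful of components:
print(float(explained[:n_signals].sum()))  # ~1.0: 4 components carry ~all variance
```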
18,465 | PCA before Random Forest Regression provide better predictive scores for my dataset than just Random Forest Regression, how to explain it? [duplicate] | I think you just answered yourself. In general, RF is not good in high-dimensional settings or when you have more features than samples, so reducing your features from 400 to 8 will help, especially if you have lots of noisy collinear features. You also have less chance of overfitting in this case, but beware of double-dipping and model selection bias: if you run lots of models and choose the best one, it might be best just by chance and wouldn't generalize to unseen data.
18,466 | Justification for conjugate prior? | Maybe satisfying the category "heuristic" justification, conjugate priors are useful because, among others, of the "fictitious sample interpretation".
For example, in the Beta-Bernoulli case, the conjugate prior is Beta with density $$\pi\left(\theta\right) = \frac{\Gamma\left(\alpha_0 + \beta_0\right)}{\Gamma\left(\alpha_0\right)\Gamma\left(\beta_0\right)}\,\theta^{\alpha_0 - 1}\left(1 - \theta\right)^{\beta_0 - 1}$$
This can be interpreted as the information contained in a sample of size $\underline{n} = \alpha_0 + \beta_0 - 2$ (loosely so, as $\underline{n}$ need not be an integer, of course) with $\alpha_0 - 1$ successes:
$$\pi\left(\theta\right) = \frac{\Gamma\left(\alpha_0 + \beta_0\right)}{\Gamma\left(\alpha_0\right)\Gamma\left(\beta_0\right)}\,\theta^{\alpha_0 - 1}\left(1 - \theta\right)^{\underline{n} - (\alpha_0 - 1)} \propto f(y|\theta),$$
where $f(y|\theta)$ is the likelihood function.
This may give you some indication about how to pick the prior parameters: in some cases, you may be able to say that, for example, you are as sure about the fairness of a coin as if you had tossed it, say, 20 times and seen 10 heads. That is, of course, a different strength of prior belief than if you are as sure about its fairness as if you had tossed it 100 times and seen 50 heads.
18,467 | Justification for conjugate prior? | By a result due to Diaconis and Ylvisaker (1979), we know that in the setting of a likelihood from an exponential family, linear estimators are Bayes if and only if the prior is conjugate.
This suggests some fundamental importance of using the conjugate prior when the estimator turns out to be linear.
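A numeric check of this linearity in the standard Beta-Bernoulli special case (not part of the Diaconis-Ylvisaker paper itself): under the conjugate prior, the Bayes estimator (posterior mean) is a weighted average of the prior mean and the sample mean, hence linear in the data.

```python
def posterior_mean(a0, b0, heads, n):
    """Bayes estimator of the success probability under a Beta(a0, b0) prior."""
    return (a0 + heads) / (a0 + b0 + n)

a0, b0, n, heads = 3.0, 5.0, 10, 6
w = n / (a0 + b0 + n)                               # weight on the sample mean
linear = w * (heads / n) + (1 - w) * (a0 / (a0 + b0))
print(posterior_mean(a0, b0, heads, n), linear)     # the two agree
```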
18,468 | Do CART trees capture interactions among predictors? | CART can capture interaction effects. An interaction effect between $X_1$ and $X_2$ occurs when the effect of explanatory variable $X_1$ on response variable $Y$ depends on the level of $X_2$. This happens in the following example:
The effect of poor economic conditions (call this $X_1$) depends on what type of building is being purchased ($X_2$). When investing in an office building, poor economic conditions decrease the predicted value of the investment by 140,000 dollars. But when investing in an apartment building, the predicted value of the investment decreases by 20,000 dollars. The effect of poor economic conditions on the predicted value of your investment depends on the type of property being bought. This is an interaction effect.
18,469 | Do CART trees capture interactions among predictors? | Short answer
CARTs need help with capturing interactions.
Long answer
Take the exact greedy algorithm (Chen and Guestrin, 2016):
The mean on the leaf will be a conditional expectation, but every split on the way to the leaf is independent of the others. If Feature A does not matter by itself but matters in interaction with Feature B, the algorithm will not split on Feature A. Without this split, the algorithm cannot set up the subsequent split on Feature B that is necessary to capture the interaction.
Trees can pick up interactions in the simplest scenarios. If you have a dataset with two features $x_1, x_2$ and target $y = XOR(x_1, x_2)$, the algorithm has nothing to split on but $x_1$ and $x_2$; therefore, you will get four leaves with $XOR$ estimated properly.
With many features, regularization, and the hard limit on the number of splits, the same algorithm can omit interactions.
Workarounds
Explicit interactions as new features
An example can be found in Zhang's "Winning Data Science Competitions" slides (2015).
Non-greedy tree algorithms
In the other question, Simone suggests lookahead-based algorithms and oblique decision trees.
A different learning approach
Some learning methods handle interactions better.
Here's a table from The Elements of Statistical Learning (see the line "Ability to extract linear combinations of features").
18,470 | ML estimate of exponential distribution (with censored data) | You can still estimate parameters by using the likelihood directly. Let the observations be $x_1, \dots, x_n$ with the exponential distribution with rate $\lambda>0$ and unknown.
The density function is $f(x;\lambda)= \lambda e^{-\lambda x}$, cumulative distribution function $F(x;\lambda)=1-e^{-\lambda x}$ and tail function $G(x;\lambda)=1-F(x;\lambda) = e^{-\lambda x}$. Assume the first $r$ observations are fully observed, while for $x_{r+1}, \dots, x_n$ we only know that $x_j > t_j$ for some known positive constants $t_j$. As always, the likelihood is the "probability of the observed data", for the censored observations, that is given by $P(X_j > t_j) = G(t_j;\lambda)$, so the full likelihood function is
$$
L(\lambda) = \prod_{i=1}^r f(x_i;\lambda) \cdot \prod_{j=r+1}^n G(t_j;\lambda)
$$
The loglikelihood function then becomes
$$
l(\lambda) = r\log\lambda -\lambda(x_1+\dots+x_r+t_{r+1}+\dots+ t_n)
$$
which has the same form as the loglikelihood for the usual, fully observed case, except for the first term $r\log\lambda$ in place of $n\log\lambda$. Writing $T$ for the mean of the observations and censoring times, the maximum likelihood estimator of $\lambda$ becomes $\hat{\lambda}=\frac{r}{nT}$, which you can compare with the fully observed case (where $r=n$, giving $\hat{\lambda}=1/T$).
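As a quick numerical sanity check of $\hat{\lambda}=r/(nT)$ (a simulation of our own, not part of the original answer), we can right-censor simulated exponential data at a fixed time and recover the rate:

```python
import random

random.seed(0)
true_lam, n, censor_at = 2.0, 100_000, 0.4

# Latent event times; everything past censor_at is only known to exceed it
times = [random.expovariate(true_lam) for _ in range(n)]
observed = [min(t, censor_at) for t in times]   # x_i if uncensored, else t_i
r = sum(t < censor_at for t in times)           # number of events actually seen

# sum(observed) = n*T, so this is the estimator r/(n*T) from above
lam_hat = r / sum(observed)
print(lam_hat)
```

The printed estimate should be very close to the true rate of 2.0, despite roughly half the observations being censored.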
EDIT
To try to answer the question in comments: If all observations were censored, that is, we did not wait long enough to observe any event (death), what can we do? In that case, $r=0$, so the loglikelihood becomes
$$
l(\lambda) = -nT \lambda
$$
that is, it is linearly decreasing in $\lambda$. So the maximum must be at $\lambda=0$! But zero is not a valid value for the rate parameter $\lambda$, since it does not correspond to any exponential distribution. We must conclude that in this case the maximum likelihood estimator does not exist! Maybe one could try to construct some sort of confidence interval for $\lambda$ based on that loglikelihood function? For that, look below.
But, in any case, the real conclusion from the data in that case is that we should wait more time until we get some events ...
Here is how we can construct a (one-sided) confidence interval for $\lambda$ in case all observations get censored. The likelihood function in that case is $e^{-\lambda n T}$, which has the same form as the likelihood function from a binomial experiment where we got all successes, which is $p^n$ (see also Confidence interval around binomial estimate of 0 or 1). In that case we want a one-sided confidence interval for $p$ of the form $[\underline{p}, 1]$. Then we get an interval for $\lambda$ by solving $\log p = -\lambda T$.
We get the confidence interval for $p$ by solving
$$
P(X=n) = p^n \ge 0.95 ~~~~\text{(say)}
$$
so that $ n\log p \ge \log 0.95 $. This finally gives the confidence interval for $\lambda$:
$$
\lambda \le \frac{-\log 0.95}{n T}.
$$
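For concreteness, the bound is trivial to evaluate; the values of $n$ and $T$ below are made-up illustrative numbers, not from the answer.

```python
import math

# Upper 95% confidence bound for lambda when every observation is censored:
#   lambda <= -log(0.95) / (n*T)
n, T = 10, 3.0
upper = -math.log(0.95) / (n * T)
print(round(upper, 5))  # about 0.00171
```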
18,471 | Why Adaboost with Decision Trees? | I talked about this in an answer to a related SO question. Decision trees are just generally a very good fit for boosting, much more so than other algorithms. The bullet point/ summary version is this:
Decision trees are non-linear. Boosting with linear models simply doesn't work well.
The weak learner needs to be consistently better than random guessing. You don't normally need to do any parameter tuning on a decision tree to get that behavior. Training an SVM really does need a parameter search. Since the data is re-weighted on each iteration, you would likely need to do another parameter search on each iteration, so you are increasing the amount of work you have to do by a large margin.
Decision trees are reasonably fast to train. Since we are going to be building 100s or 1000s of them, thats a good property. They are also fast to classify, which is again important when you need 100s or 1000s to run before you can output your decision.
By changing the depth you have simple and easy control over the bias/variance trade-off, knowing that boosting can reduce bias but also significantly reduce variance. Boosting is known to overfit, so the easy knob to tune is helpful in that regard.
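These properties can be seen in action with a minimal AdaBoost sketch in pure Python, using depth-1 trees (decision stumps) as the weak learners; the dataset, helper names, and round count below are our own illustrative choices, not from the answer. No single stump can classify the one-dimensional interval problem below, but three boosted stumps fit it exactly.

```python
import math

def stump_fit(X, y, w):
    """Exhaustively pick the (feature, threshold, polarity) with least weighted error."""
    best = None
    for f in range(len(X[0])):
        for thr in sorted({x[f] for x in X}):
            for pol in (1, -1):
                pred = [pol if x[f] <= thr else -pol for x in X]
                err = sum(wi for wi, p, yi in zip(w, pred, y) if p != yi)
                if best is None or err < best[0]:
                    best = (err, f, thr, pol)
    return best

def adaboost(X, y, rounds):
    n = len(X)
    w = [1.0 / n] * n                       # start with uniform weights
    model = []
    for _ in range(rounds):
        err, f, thr, pol = stump_fit(X, y, w)
        alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
        pred = [pol if x[f] <= thr else -pol for x in X]
        # re-weight: misclassified points become heavier for the next round
        w = [wi * math.exp(-alpha * p * yi) for wi, p, yi in zip(w, pred, y)]
        s = sum(w)
        w = [wi / s for wi in w]
        model.append((alpha, f, thr, pol))
    return model

def predict(model, x):
    score = sum(a * (p if x[f] <= t else -p) for a, f, t, p in model)
    return 1 if score >= 0 else -1

# y = +1 on the middle interval only: not separable by any single threshold
X = [[0.05 + 0.1 * i] for i in range(10)]
y = [1 if 0.3 < x[0] < 0.7 else -1 for x in X]

model = adaboost(X, y, rounds=3)
print(all(predict(model, x) == yi for x, yi in zip(X, y)))  # True
```

Note how little tuning the stump needs: the exhaustive threshold search is the entire training procedure, which is exactly what makes re-fitting it on re-weighted data at every round cheap.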
18,472 | Why Adaboost with Decision Trees? | I do not have a text-book answer. However, here are some thoughts.
Boosting can be seen in direct comparison with bagging. These are two different approaches to the bias-variance trade-off dilemma. Whereas bagging takes as weak learners models with low bias and high variance, and by averaging the ensemble decreases the variance at the cost of a little bias, boosting works with the opposite kind of weak learner: one with high bias and low variance. By building one learner on top of another, the boosting ensemble tries to decrease the bias, at the cost of a little variance.
As a consequence, if you consider for example using trees as weak learners for both bagging and boosting, the best choice is small/short trees with boosting and very detailed trees with bagging. This is why a boosting procedure very often uses a decision stump as its weak learner, which is the shortest possible tree (a single if condition on a single dimension). This decision stump is very stable, so it has very low variance.
I do not see any intrinsic reason why the weak learners in a boosting procedure have to be trees. However, short trees are simple, easy to implement and easy to understand. That said, I think that in order to be successful with a boosting procedure, your weak learner has to have low variance and be rigid, with very few degrees of freedom. For example, I see no point in having a neural network as a weak learner.
Additionally, you should note that for some kinds of boosting procedures, gradient boosting for example, Friedman found that if the weak learner is a tree, some optimization in the way boosting works can be done. Thus we have gradient boosted trees. There is a nice exposition of boosting in the ESL book.
18,473 | Derivation of Regularized Linear Regression Cost Function per Coursera Machine Learning Course | Actually, if you check the lecture notes just after the video, they show the formula correctly.
The slides that you have linked here show the exact slide from the video.
18,474 | Derivation of Regularized Linear Regression Cost Function per Coursera Machine Learning Course | $J(\theta) = \frac{1}{2m}[\sum_{i=1}^m(h_\theta (x^{(i)}) - y^{(i)})^2 + \lambda\sum_{j=1}^n\theta^2_j]$
Now
$\frac{\partial}{\partial \theta_j}(h_\theta (x^{(i)}) - y^{(i)})^2=2[(h_\theta (x^{(i)}) - y^{(i)})\frac{\partial}{\partial \theta_j}\{h_\theta(x^{(i)})\}]$
Note that in a linear model (being discussed on the pages you mention), $\frac{\partial}{\partial \theta_j}h_\theta(x^{(i)})=[x^{(i)}]_j$
$\frac{\partial}{\partial \theta_j}\lambda\sum_{j=1}^n\theta_j^2=2\lambda\theta_j$
So for the linear case
$\frac{\partial}{\partial \theta_j}J(\theta) = \frac{1}{m}[\sum_{i=1}^m(h_\theta (x^{(i)}) - y^{(i)})x^{(i)}_j + \lambda\theta_j]$
Looks like perhaps both you and Andrew might have typos. Well, at least two of the three of us seem to.
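The derivative can be double-checked numerically with a central finite difference; the toy data below are our own, and (matching the formulas above) every $\theta_j$ with $j \ge 1$ is regularized while the bias term $\theta_0$ is not.

```python
# Finite-difference check of
#   dJ/dtheta_j = (1/m) [ sum_i (h(x_i) - y_i) x_ij + lambda * theta_j ]   (j >= 1)
# for linear h_theta(x) = theta . x; X, y, theta, lam are arbitrary toy values.
m = 4
X = [[1.0, 2.0], [1.0, 0.5], [1.0, -1.0], [1.0, 3.0]]  # column 0 = bias term
y = [5.0, 2.0, -1.0, 7.0]
theta = [0.3, -0.7]
lam = 0.1

def h(th, x):
    return sum(t * xj for t, xj in zip(th, x))

def J(th):
    sq = sum((h(th, x) - yi) ** 2 for x, yi in zip(X, y))
    reg = lam * sum(t * t for t in th[1:])          # theta_0 is not regularized
    return (sq + reg) / (2 * m)

def grad(th, j):
    s = sum((h(th, x) - yi) * x[j] for x, yi in zip(X, y))
    return (s + (lam * th[j] if j >= 1 else 0.0)) / m

eps = 1e-6
for j in range(len(theta)):
    tp, tm = theta[:], theta[:]
    tp[j] += eps
    tm[j] -= eps
    fd = (J(tp) - J(tm)) / (2 * eps)
    assert abs(fd - grad(theta, j)) < 1e-6          # analytic matches numeric
```

The asserts pass, confirming the sign and the $1/m$ scaling of the regularization term in the gradient.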
18,475 | Derivation of Regularized Linear Regression Cost Function per Coursera Machine Learning Course | Actually, I think that's just a typo.
On slide #16 he writes the derivative of the cost function (with the regularization term) with respect to theta, but it's in the context of the Gradient Descent algorithm. Hence, he's also multiplying this derivative by $-\alpha$. Notice: On the second line (of slide 16) he has $-\lambda\theta$ (as you've written), multiplied by $-\alpha$. However, by the third line the multiplied term is still negative, even though, if the second line were correct, the negative signs would've cancelled out.
Make sense?
18,476 | How did Karl Pearson come up with the chi-squared statistic? | Pearson's 1900 paper is out of copyright, so we can read it online.
You should begin by noting that this paper is about the goodness of fit test, not the test of independence or homogeneity.
He proceeds by working with the multivariate normal, and the chi-square arises as a sum of squared standardized normal variates.
You can see from the discussion on p160-161 he's clearly discussing applying the test to multinomial distributed data (I don't think he uses that term anywhere). He apparently understands the approximate multivariate normality of the multinomial (certainly he knows the margins are approximately normal - that's a very old result - and knows the means, variances and covariances, since they're stated in the paper); my guess is that most of that stuff is already old hat by 1900. (Note that the chi-squared distribution itself dates back to work by Helmert in the mid-1870s.)
Then by the bottom of p163 he derives a chi-square statistic as "a measure of goodness of fit" (the statistic itself appears in the exponent of the multivariate normal approximation).
He then goes on to discuss how to evaluate the p-value*, and then he correctly gives the upper tail area of a $\chi^2_{12}$ beyond 43.87 as 0.000016. [You should keep in mind, however, that he didn't correctly understand how to adjust degrees of freedom for parameter estimation at that stage, so some of the examples in his papers use too high a d.f.]
*(note that neither Fisherian nor Neyman-Pearson testing paradigms exist, we nevertheless clearly see him apply the concept of a p-value already.)
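Pearson's tail area is easy to verify today: for even degrees of freedom $\nu$ the chi-squared survival function has the closed form $P(X > x) = e^{-x/2}\sum_{k=0}^{\nu/2-1}(x/2)^k/k!$, which we can evaluate in a few lines (our own check, not part of the answer):

```python
import math

def chi2_sf_even_df(x, df):
    """Chi-squared upper tail P(X > x) for even df, via the closed form."""
    lam = x / 2
    return math.exp(-lam) * sum(lam**k / math.factorial(k) for k in range(df // 2))

p = chi2_sf_even_df(43.87, 12)
print(f"{p:.6f}")  # 0.000016, matching Pearson's value
```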
You'll note that he doesn't explicitly write terms like $(O_i-E_i)^2/E_i$. Instead, he writes $m_1$, $m_2$ etc for the expected counts and for the observed quantities he uses $m'_1$ and so forth. He then defines $e = m-m'$ (bottom half p160) and computes $e^2/m$ for each cell (see eq. (xv) p163 and the last column of the table at the bottom of p167) ... equivalent quantities, but in different notation.
Much of the present way of understanding the chi-square test is not yet in place, but on the other hand, quite a bit is already there (at least if you know what to look for). A lot happened in the 1920s (and onward) that changed the way we look at these things.
As for why we divide by $E_i$ in the multinomial case, it happens that even though the variance of the individual components in a multinomial are smaller than $E_i$, when we account for the covariances, it's equivalent to just dividing by $E_i$, making for a nice simplification.
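That simplification can be checked numerically (with our own toy counts, not figures from the paper): the quadratic form of the multivariate-normal approximation, taken over the first $k-1$ counts with the multinomial covariance $\Sigma_{ij} = n(p_i\delta_{ij} - p_i p_j)$, equals Pearson's $\sum (O_i - E_i)^2/E_i$ exactly.

```python
n = 100
p = [0.2, 0.3, 0.5]
O = [25, 28, 47]                        # observed counts, summing to n
E = [n * pi for pi in p]                # expected counts

# Pearson's statistic over all k cells
chi2 = sum((o - e) ** 2 / e for o, e in zip(O, E))

# Quadratic form over the first k-1 counts (the k-th is determined by the rest)
d = [O[i] - E[i] for i in range(2)]
S = [[n * (p[i] * (i == j) - p[i] * p[j]) for j in range(2)] for i in range(2)]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
Sinv = [[S[1][1] / det, -S[0][1] / det], [-S[1][0] / det, S[0][0] / det]]
quad = sum(d[i] * Sinv[i][j] * d[j] for i in range(2) for j in range(2))

print(abs(chi2 - quad) < 1e-9)  # True
```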
Added in edit:
The 1983 paper by Plackett gives a good deal of historical context, and something of a guide to the paper. I highly recommend taking a look at it. It looks like it's free online via JStor (if you sign in), so you shouldn't even need access via an institution to read it.
Plackett, R. L. (1983),
"Karl Pearson and the Chi-Squared Test,"
International Statistical Review,
Vol. 51, No. 1 (Apr), pp. 59-72
18,477 | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability? | Update: With the benefit of a few years' hindsight, I've penned a more concise treatment of essentially the same material in response to a similar question.
How to Construct a Confidence Region
Let us begin with a general method for constructing confidence regions. It can be applied to a single parameter, to yield a confidence interval or set of intervals; and it can be applied to two or more parameters, to yield higher dimensional confidence regions.
We assert that the observed statistics $D$ originate from a distribution with parameters $\theta$, namely the sampling distribution $s(d|\theta)$ over possible statistics $d$, and seek a confidence region for $\theta$ in the set of possible values $\Theta$. Define a Highest Density Region (HDR): the $h$-HDR of a PDF is the smallest subset of its domain that supports probability $h$. Denote the $h$-HDR of $s(d|\psi)$ as $H_\psi$, for any $\psi \in \Theta$. Then, the $h$ confidence region for $\theta$, given data $D$, is the set $C_D = \{ \phi : D \in H_\phi \}$. A typical value of $h$ would be 0.95.
A Frequentist Interpretation
From the preceding definition of a confidence region follows
$$
d \in H_\psi \longleftrightarrow \psi \in C_d
$$
with $C_d = \{ \phi : d \in H_\phi \}$. Now imagine a large set of (imaginary) observations $\{D_i\}$, taken under similar circumstances to $D$. i.e. They are samples from $s(d|\theta)$. Since $H_\theta$ supports probability mass $h$ of the PDF $s(d|\theta)$, $P(D_i \in H_\theta) = h$ for all $i$. Therefore, the fraction of $\{D_i\}$ for which $D_i \in H_\theta$ is $h$. And so, using the equivalence above, the fraction of $\{D_i\}$ for which $\theta \in C_{D_i}$ is also $h$.
This, then, is what the frequentist claim for the $h$ confidence region for $\theta$ amounts to:
Take a large number of imaginary observations $\{D_i\}$ from the sampling distribution $s(d|\theta)$ that gave rise to the observed statistics $D$. Then, $\theta$ lies within a fraction $h$ of the analogous but imaginary confidence regions $\{C_{D_i}\}$.
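This claim can be illustrated with a small simulation (our own toy setup, not part of the original answer): take $s(d \mid \theta)$ to be $N(\theta, 1)$, so the $0.95$-HDR is $\theta \pm 1.96$ and hence $C_D = [D - 1.96,\ D + 1.96]$; the fraction of imaginary confidence regions containing $\theta$ should then be close to $0.95$.

```python
import random

random.seed(1)
theta, z = 3.0, 1.96
trials = 100_000

# theta is in C_D exactly when D is within z of theta
hits = sum(abs(random.gauss(theta, 1.0) - theta) <= z for _ in range(trials))
print(hits / trials)  # close to 0.95
```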
The confidence region $C_D$ therefore does not make any claim about the probability that $\theta$ lies somewhere! The reason is simply that there is nothing in the formulation that allows us to speak of a probability distribution over $\theta$. The interpretation is just elaborate superstructure, which does not improve the base. The base is only $s(d | \theta)$ and $D$, where $\theta$ does not appear as a distributed quantity, and there is no information we can use to address that. There are basically two ways to get a distribution over $\theta$:
Assign a distribution directly from the information at hand: $p(\theta | I)$.
Relate $\theta$ to another distributed quantity: $p(\theta | I) = \int p(\theta x | I) dx = \int p(\theta | x I) p(x | I) dx$.
In both cases, $\theta$ must appear on the left somewhere. Frequentists cannot use either method, because they both require a heretical prior.
A Bayesian View
The most a Bayesian can make of the $h$ confidence region $C_D$, given without qualification, is simply the direct interpretation: that it is the set of $\phi$ for which $D$ falls in the $h$-HDR $H_\phi$ of the sampling distribution $s(d|\phi)$. It does not necessarily tell us much about $\theta$, and here's why.
The probability that $\theta \in C_D$, given $D$ and the background information $I$, is:
\begin{align*}
P(\theta \in C_D | DI) &= \int_{C_D} p(\theta | DI) d\theta \\
&= \int_{C_D} \frac{p(D | \theta I) p(\theta | I)}{p(D | I)} d\theta
\end{align*}
Notice that, unlike the frequentist interpretation, we have immediately demanded a distribution over $\theta$. The background information $I$ tells us, as before, that the sampling distribution is $s(d | \theta)$:
\begin{align*}
P(\theta \in C_D | DI) &= \int_{C_D} \frac{s(D | \theta) p(\theta | I)}{p(D | I)} d \theta \\
&= \frac{\int_{C_D} s(D | \theta) p(\theta | I) d\theta}{p(D | I)} \\
\text{i.e.} \quad\quad P(\theta \in C_D | DI) &= \frac{\int_{C_D} s(D | \theta) p(\theta | I) d\theta}{\int s(D | \theta) p(\theta | I) d\theta}
\end{align*}
Now this expression does not in general evaluate to $h$, which is to say, the $h$ confidence region $C_D$ does not always contain $\theta$ with probability $h$. In fact it can be starkly different from $h$. There are, however, many common situations in which it does evaluate to $h$, which is why confidence regions are often consistent with our probabilistic intuitions.
For example, suppose that the prior joint PDF of $d$ and $\theta$ is symmetric in that $p_{d,\theta}(d,\theta | I) = p_{d,\theta}(\theta,d | I)$. (Clearly this involves an assumption that the PDF ranges over the same domain in $d$ and $\theta$.) Then, if the prior is $p(\theta | I) = f(\theta)$, we have $s(D | \theta) p(\theta | I) = s(D | \theta) f(\theta) = s(\theta | D) f(D)$. Hence
\begin{align*}
P(\theta \in C_D | DI) &= \frac{\int_{C_D} s(\theta | D) d\theta}{\int s(\theta | D) d\theta} \\
\text{i.e.} \quad\quad P(\theta \in C_D | DI) &= \int_{C_D} s(\theta | D) d\theta
\end{align*}
From the definition of an HDR we know that for any $\psi \in \Theta$
\begin{align*}
\int_{H_\psi} s(d | \psi) dd &= h \\
\text{and therefore that} \quad\quad \int_{H_D} s(d | D) dd &= h \\
\text{or equivalently} \quad\quad \int_{H_D} s(\theta | D) d\theta &= h
\end{align*}
Therefore, given that $s(d | \theta) f(\theta) = s(\theta | d) f(d)$, $C_D = H_D$ implies $P(\theta \in C_D | DI) = h$. The antecedent satisfies
$$
C_D = H_D \longleftrightarrow \forall \psi \; [ \psi \in C_D \leftrightarrow \psi \in H_D ]
$$
Applying the equivalence near the top:
$$
C_D = H_D \longleftrightarrow \forall \psi \; [ D \in H_\psi \leftrightarrow \psi \in H_D ]
$$
Thus, the confidence region $C_D$ contains $\theta$ with probability $h$ if for all possible values $\psi$ of $\theta$, the $h$-HDR of $s(d | \psi)$ contains $D$ if and only if the $h$-HDR of $s(d | D)$ contains $\psi$.
Now the symmetric relation $D \in H_\psi \leftrightarrow \psi \in H_D$ is satisfied for all $\psi$ when $s(\psi + \delta | \psi) = s(D - \delta | D)$ for all $\delta$ that span the support of $s(d | D)$ and $s(d | \psi)$. We can therefore form the following argument:
$s(d | \theta) f(\theta) = s(\theta | d) f(d)$ (premise)
$\forall \psi \; \forall \delta \; [ s(\psi + \delta | \psi) = s(D - \delta | D) ]$ (premise)
$\forall \psi \; \forall \delta \; [ s(\psi + \delta | \psi) = s(D - \delta | D) ] \longrightarrow \forall \psi \; [ D \in H_\psi \leftrightarrow \psi \in H_D ]$
$\therefore \quad \forall \psi \; [ D \in H_\psi \leftrightarrow \psi \in H_D ]$
$\forall \psi \; [ D \in H_\psi \leftrightarrow \psi \in H_D ] \longrightarrow C_D = H_D$
$\therefore \quad C_D = H_D$
$[s(d | \theta) f(\theta) = s(\theta | d) f(d) \wedge C_D = H_D] \longrightarrow P(\theta \in C_D | DI) = h$
$\therefore \quad P(\theta \in C_D | DI) = h$
Let's apply the argument to a confidence interval on the mean of a 1-D normal distribution $(\mu, \sigma)$, given a sample mean $\bar{x}$ from $n$ measurements. We have $\theta = \mu$ and $d = \bar{x}$, so that the sampling distribution is
$$
s(d | \theta) = \frac{\sqrt{n}}{\sigma \sqrt{2 \pi}} e^{-\frac{n}{2 \sigma^2} { \left( d - \theta \right) }^2 }
$$
Suppose also that we know nothing about $\theta$ before taking the data (except that it's a location parameter) and therefore assign a uniform prior: $f(\theta) = k$. Clearly we now have $s(d | \theta) f(\theta) = s(\theta | d) f(d)$, so the first premise is satisfied. Let $s(d | \theta) = g\left( (d - \theta)^2 \right)$. (i.e. It can be written in that form.) Then
\begin{gather*}
s(\psi + \delta | \psi) = g \left( (\psi + \delta - \psi)^2 \right) = g(\delta^2) \\
\text{and} \quad\quad s(D - \delta | D) = g \left( (D - \delta - D)^2 \right) = g(\delta^2) \\
\text{so that} \quad\quad \forall \psi \; \forall \delta \; [s(\psi + \delta | \psi) = s(D - \delta | D)]
\end{gather*}
whereupon the second premise is satisfied. Both premises being true, the eight-point argument leads us to conclude that the probability that $\theta$ lies in the confidence interval $C_D$ is $h$!
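This conclusion is easy to verify numerically for the normal case (a sketch; $\sigma$, $n$ and the observed $\bar{x}$ are made-up values): with a flat prior the posterior for $\theta$ is $N(\bar{x}, \sigma^2/n)$, and the posterior mass inside the standard confidence interval is exactly $h$.

```python
import numpy as np
from scipy import stats

sigma, n = 2.0, 16          # made-up known sigma and sample size
xbar = 5.3                  # made-up observed sample mean (the statistic D)
se = sigma / np.sqrt(n)

# Standard 95% confidence interval for the mean.
ci = stats.norm.interval(0.95, loc=xbar, scale=se)

# Under a flat prior, the posterior for theta is N(xbar, se^2).
posterior = stats.norm(loc=xbar, scale=se)
p = posterior.cdf(ci[1]) - posterior.cdf(ci[0])
print(ci, p)  # p is 0.95 (up to floating point)
```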
We therefore have an amusing irony:
The frequentist who assigns the $h$ confidence interval cannot say that $P(\theta \in C_D) = h$, no matter how innocently uniform $\theta$ looks before incorporating the data.
The Bayesian who would not assign an $h$ confidence interval in that way knows anyhow that $P(\theta \in C_D | DI) = h$.
Final Remarks
We have identified conditions (i.e. the two premises) under which the $h$ confidence region does indeed yield probability $h$ that $\theta \in C_D$. A frequentist will baulk at the first premise, because it involves a prior on $\theta$, and this sort of deal-breaker is inescapable on the route to a probability. But for a Bayesian, it is acceptable---nay, essential. These conditions are sufficient but not necessary, so there are many other circumstances under which the Bayesian $P(\theta \in C_D | DI)$ equals $h$. Equally though, there are many circumstances in which $P(\theta \in C_D | DI) \ne h$, especially when the prior information is significant.
We have applied a Bayesian analysis just as a consistent Bayesian would, given the information at hand, including statistics $D$. But a Bayesian, if he possibly can, will apply his methods to the raw measurements instead---to the $\{x_i\}$, rather than $\bar{x}$. Oftentimes, collapsing the raw data into summary statistics $D$ destroys information in the data; and then the summary statistics are incapable of speaking as eloquently as the original data about the parameters $\theta$. | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true para | Update: With the benefit of a few years' hindsight, I've penned a more concise treatment of essentially the same material in response to a similar question.
How to Construct a Confidence Region
Let u | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability?
Update: With the benefit of a few years' hindsight, I've penned a more concise treatment of essentially the same material in response to a similar question.
How to Construct a Confidence Region
Let us begin with a general method for constructing confidence regions. It can be applied to a single parameter, to yield a confidence interval or set of intervals; and it can be applied to two or more parameters, to yield higher dimensional confidence regions.
We assert that the observed statistics $D$ originate from a distribution with parameters $\theta$, namely the sampling distribution $s(d|\theta)$ over possible statistics $d$, and seek a confidence region for $\theta$ in the set of possible values $\Theta$. Define a Highest Density Region (HDR): the $h$-HDR of a PDF is the smallest subset of its domain that supports probability $h$. Denote the $h$-HDR of $s(d|\psi)$ as $H_\psi$, for any $\psi \in \Theta$. Then, the $h$ confidence region for $\theta$, given data $D$, is the set $C_D = \{ \phi : D \in H_\phi \}$. A typical value of $h$ would be 0.95.
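As a sketch of this construction (with a made-up observed statistic and a unit-variance normal sampling model, for which the $h$-HDR is the central interval):

```python
import numpy as np
from scipy import stats

h = 0.95
D = 1.3                                  # made-up observed statistic
psi = np.linspace(-5.0, 8.0, 13_001)     # candidate parameter values

# For the symmetric unimodal N(psi, 1), the h-HDR of s(d|psi) is the
# central interval around psi.
lo, hi = stats.norm.interval(h, loc=psi, scale=1.0)

# C_D = { phi : D lies in the h-HDR H_phi }
members = psi[(lo <= D) & (D <= hi)]
print(members.min(), members.max())      # approximately D -/+ 1.96
```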
A Frequentist Interpretation
From the preceding definition of a confidence region follows
$$
d \in H_\psi \longleftrightarrow \psi \in C_d
$$
with $C_d = \{ \phi : d \in H_\phi \}$. Now imagine a large set of (imaginary) observations $\{D_i\}$, taken under similar circumstances to $D$. i.e. They are samples from $s(d|\theta)$. Since $H_\theta$ supports probability mass $h$ of the PDF $s(d|\theta)$, $P(D_i \in H_\theta) = h$ for all $i$. Therefore, the fraction of $\{D_i\}$ for which $D_i \in H_\theta$ is $h$. And so, using the equivalence above, the fraction of $\{D_i\}$ for which $\theta \in C_{D_i}$ is also $h$.
This, then, is what the frequentist claim for the $h$ confidence region for $\theta$ amounts to:
Take a large number of imaginary observations $\{D_i\}$ from the sampling distribution $s(d|\theta)$ that gave rise to the observed statistics $D$. Then, $\theta$ lies within a fraction $h$ of the analogous but imaginary confidence regions $\{C_{D_i}\}$.
The confidence region $C_D$ therefore does not make any claim about the probability that $\theta$ lies somewhere! The reason is simply that there is nothing in the formulation that allows us to speak of a probability distribution over $\theta$. The interpretation is just elaborate superstructure, which does not improve the base. The base is only $s(d | \theta)$ and $D$, where $\theta$ does not appear as a distributed quantity, and there is no information we can use to address that. There are basically two ways to get a distribution over $\theta$:
Assign a distribution directly from the information at hand: $p(\theta | I)$.
Relate $\theta$ to another distributed quantity: $p(\theta | I) = \int p(\theta x | I) dx = \int p(\theta | x I) p(x | I) dx$.
In both cases, $\theta$ must appear on the left somewhere. Frequentists cannot use either method, because they both require a heretical prior.
A Bayesian View
The most a Bayesian can make of the $h$ confidence region $C_D$, given without qualification, is simply the direct interpretation: that it is the set of $\phi$ for which $D$ falls in the $h$-HDR $H_\phi$ of the sampling distribution $s(d|\phi)$. It does not necessarily tell us much about $\theta$, and here's why.
The probability that $\theta \in C_D$, given $D$ and the background information $I$, is:
\begin{align*}
P(\theta \in C_D | DI) &= \int_{C_D} p(\theta | DI) d\theta \\
&= \int_{C_D} \frac{p(D | \theta I) p(\theta | I)}{p(D | I)} d\theta
\end{align*}
Notice that, unlike the frequentist interpretation, we have immediately demanded a distribution over $\theta$. The background information $I$ tells us, as before, that the sampling distribution is $s(d | \theta)$:
\begin{align*}
P(\theta \in C_D | DI) &= \int_{C_D} \frac{s(D | \theta) p(\theta | I)}{p(D | I)} d \theta \\
&= \frac{\int_{C_D} s(D | \theta) p(\theta | I) d\theta}{p(D | I)} \\
\text{i.e.} \quad\quad P(\theta \in C_D | DI) &= \frac{\int_{C_D} s(D | \theta) p(\theta | I) d\theta}{\int s(D | \theta) p(\theta | I) d\theta}
\end{align*}
Now this expression does not in general evaluate to $h$, which is to say, the $h$ confidence region $C_D$ does not always contain $\theta$ with probability $h$. In fact it can be starkly different from $h$. There are, however, many common situations in which it does evaluate to $h$, which is why confidence regions are often consistent with our probabilistic intuitions.
For example, suppose that the prior joint PDF of $d$ and $\theta$ is symmetric in that $p_{d,\theta}(d,\theta | I) = p_{d,\theta}(\theta,d | I)$. (Clearly this involves an assumption that the PDF ranges over the same domain in $d$ and $\theta$.) Then, if the prior is $p(\theta | I) = f(\theta)$, we have $s(D | \theta) p(\theta | I) = s(D | \theta) f(\theta) = s(\theta | D) f(D)$. Hence
\begin{align*}
P(\theta \in C_D | DI) &= \frac{\int_{C_D} s(\theta | D) d\theta}{\int s(\theta | D) d\theta} \\
\text{i.e.} \quad\quad P(\theta \in C_D | DI) &= \int_{C_D} s(\theta | D) d\theta
\end{align*}
From the definition of an HDR we know that for any $\psi \in \Theta$
\begin{align*}
\int_{H_\psi} s(d | \psi) dd &= h \\
\text{and therefore that} \quad\quad \int_{H_D} s(d | D) dd &= h \\
\text{or equivalently} \quad\quad \int_{H_D} s(\theta | D) d\theta &= h
\end{align*}
Therefore, given that $s(d | \theta) f(\theta) = s(\theta | d) f(d)$, $C_D = H_D$ implies $P(\theta \in C_D | DI) = h$. The antecedent satisfies
$$
C_D = H_D \longleftrightarrow \forall \psi \; [ \psi \in C_D \leftrightarrow \psi \in H_D ]
$$
Applying the equivalence near the top:
$$
C_D = H_D \longleftrightarrow \forall \psi \; [ D \in H_\psi \leftrightarrow \psi \in H_D ]
$$
Thus, the confidence region $C_D$ contains $\theta$ with probability $h$ if for all possible values $\psi$ of $\theta$, the $h$-HDR of $s(d | \psi)$ contains $D$ if and only if the $h$-HDR of $s(d | D)$ contains $\psi$.
Now the symmetric relation $D \in H_\psi \leftrightarrow \psi \in H_D$ is satisfied for all $\psi$ when $s(\psi + \delta | \psi) = s(D - \delta | D)$ for all $\delta$ that span the support of $s(d | D)$ and $s(d | \psi)$. We can therefore form the following argument:
$s(d | \theta) f(\theta) = s(\theta | d) f(d)$ (premise)
$\forall \psi \; \forall \delta \; [ s(\psi + \delta | \psi) = s(D - \delta | D) ]$ (premise)
$\forall \psi \; \forall \delta \; [ s(\psi + \delta | \psi) = s(D - \delta | D) ] \longrightarrow \forall \psi \; [ D \in H_\psi \leftrightarrow \psi \in H_D ]$
$\therefore \quad \forall \psi \; [ D \in H_\psi \leftrightarrow \psi \in H_D ]$
$\forall \psi \; [ D \in H_\psi \leftrightarrow \psi \in H_D ] \longrightarrow C_D = H_D$
$\therefore \quad C_D = H_D$
$[s(d | \theta) f(\theta) = s(\theta | d) f(d) \wedge C_D = H_D] \longrightarrow P(\theta \in C_D | DI) = h$
$\therefore \quad P(\theta \in C_D | DI) = h$
Let's apply the argument to a confidence interval on the mean of a 1-D normal distribution $(\mu, \sigma)$, given a sample mean $\bar{x}$ from $n$ measurements. We have $\theta = \mu$ and $d = \bar{x}$, so that the sampling distribution is
$$
s(d | \theta) = \frac{\sqrt{n}}{\sigma \sqrt{2 \pi}} e^{-\frac{n}{2 \sigma^2} { \left( d - \theta \right) }^2 }
$$
Suppose also that we know nothing about $\theta$ before taking the data (except that it's a location parameter) and therefore assign a uniform prior: $f(\theta) = k$. Clearly we now have $s(d | \theta) f(\theta) = s(\theta | d) f(d)$, so the first premise is satisfied. Let $s(d | \theta) = g\left( (d - \theta)^2 \right)$. (i.e. It can be written in that form.) Then
\begin{gather*}
s(\psi + \delta | \psi) = g \left( (\psi + \delta - \psi)^2 \right) = g(\delta^2) \\
\text{and} \quad\quad s(D - \delta | D) = g \left( (D - \delta - D)^2 \right) = g(\delta^2) \\
\text{so that} \quad\quad \forall \psi \; \forall \delta \; [s(\psi + \delta | \psi) = s(D - \delta | D)]
\end{gather*}
whereupon the second premise is satisfied. Both premises being true, the eight-point argument leads us to conclude that the probability that $\theta$ lies in the confidence interval $C_D$ is $h$!
We therefore have an amusing irony:
The frequentist who assigns the $h$ confidence interval cannot say that $P(\theta \in C_D) = h$, no matter how innocently uniform $\theta$ looks before incorporating the data.
The Bayesian who would not assign an $h$ confidence interval in that way knows anyhow that $P(\theta \in C_D | DI) = h$.
Final Remarks
We have identified conditions (i.e. the two premises) under which the $h$ confidence region does indeed yield probability $h$ that $\theta \in C_D$. A frequentist will baulk at the first premise, because it involves a prior on $\theta$, and this sort of deal-breaker is inescapable on the route to a probability. But for a Bayesian, it is acceptable---nay, essential. These conditions are sufficient but not necessary, so there are many other circumstances under which the Bayesian $P(\theta \in C_D | DI)$ equals $h$. Equally though, there are many circumstances in which $P(\theta \in C_D | DI) \ne h$, especially when the prior information is significant.
We have applied a Bayesian analysis just as a consistent Bayesian would, given the information at hand, including statistics $D$. But a Bayesian, if he possibly can, will apply his methods to the raw measurements instead---to the $\{x_i\}$, rather than $\bar{x}$. Oftentimes, collapsing the raw data into summary statistics $D$ destroys information in the data; and then the summary statistics are incapable of speaking as eloquently as the original data about the parameters $\theta$. | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true para
Update: With the benefit of a few years' hindsight, I've penned a more concise treatment of essentially the same material in response to a similar question.
How to Construct a Confidence Region
Let u |
18,478 | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability? | from a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability?
Two answers to this, the first being less helpful than the second
There are no confidence intervals in Bayesian statistics, so the question doesn't pertain.
In Bayesian statistics, there are however credible intervals, which play a similar role to confidence intervals. If you view priors and posteriors in Bayesian statistics as quantifying the reasonable belief that a parameter takes on certain values, then the answer to your question is yes, a 95% credible interval represents an interval within which a parameter is believed to lie with 95% probability.
If I have a process that I know produces a correct answer 95% of the time then the probability of the next answer being correct is 0.95 (given that I don't have any extra information regarding the process).
yes, the process guesses a right answer with 95% probability
Similarly if someone shows me a confidence interval that is created by a process that will contain the true parameter 95% of the time, should I not be right in saying that it contains the true parameter with 0.95 probability, given what I know?
Just the same as your process, the confidence interval guesses the correct answer with 95% probability. We're back in the world of classical statistics here: before you gather the data you can say there's a 95% probability of randomly gathered data determining the bounds of the confidence interval such that the mean is within the bounds.
With your process, after you've gotten your answer, you can't say based on whatever your guess was, that the true answer is the same as your guess with 95% probability. The guess is either right or wrong.
And just the same as your process, in the confidence interval case, after you've gotten the data and have an actual lower and upper bound, the mean is either within those bounds or it isn't, i.e. the chance of the mean being within those particular bounds is either 1 or 0. (Having skimmed the question you refer to it seems this is covered in much more detail there.)
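A simulation makes the pre-data/post-data distinction concrete (a sketch with made-up values for $\mu$, $\sigma$ and $n$): across repeated draws about 95% of the intervals cover $\mu$, yet each individual realized interval either covers it or it does not.

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n = 10.0, 2.0, 25   # made-up true mean, known sd, sample size
z = 1.96                       # two-sided 95% normal quantile
reps = 50_000

hits = 0
for _ in range(reps):
    xbar = rng.normal(mu, sigma, n).mean()
    half = z * sigma / np.sqrt(n)
    # Each realized interval either contains mu (1) or does not (0).
    hits += (xbar - half <= mu <= xbar + half)

print(hits / reps)  # close to 0.95
```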
How to interpret a confidence interval given to you if you subscribe to a Bayesian view of probability.
There are a couple of ways of looking at this
Technically, the confidence interval hasn't been produced using a prior and Bayes theorem, so if you had a prior belief about the parameter concerned, there would be no way you could interpret the confidence interval in the Bayesian framework.
Another widely used and respected interpretation of confidence intervals is that they provide a "plausible range" of values for the parameter (see, e.g., here). This de-emphasises the "repeated experiments" interpretation.
Moreover, under certain circumstances, notably when the prior is uninformative (doesn't tell you anything, e.g. flat), confidence intervals can produce exactly the same interval as a credible interval. In these circumstances, as a Bayesianist you could argue that had you taken the Bayesian route you would have gotten exactly the same results and you could interpret the confidence interval in the same way as a credible interval. | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true para | from a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability?
Two answers to this, the first being less helpful than the second
Ther | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability?
from a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability?
Two answers to this, the first being less helpful than the second
There are no confidence intervals in Bayesian statistics, so the question doesn't pertain.
In Bayesian statistics, there are however credible intervals, which play a similar role to confidence intervals. If you view priors and posteriors in Bayesian statistics as quantifying the reasonable belief that a parameter takes on certain values, then the answer to your question is yes, a 95% credible interval represents an interval within which a parameter is believed to lie with 95% probability.
If I have a process that I know produces a correct answer 95% of the time then the probability of the next answer being correct is 0.95 (given that I don't have any extra information regarding the process).
yes, the process guesses a right answer with 95% probability
Similarly if someone shows me a confidence interval that is created by a process that will contain the true parameter 95% of the time, should I not be right in saying that it contains the true parameter with 0.95 probability, given what I know?
Just the same as your process, the confidence interval guesses the correct answer with 95% probability. We're back in the world of classical statistics here: before you gather the data you can say there's a 95% probability of randomly gathered data determining the bounds of the confidence interval such that the mean is within the bounds.
With your process, after you've gotten your answer, you can't say based on whatever your guess was, that the true answer is the same as your guess with 95% probability. The guess is either right or wrong.
And just the same as your process, in the confidence interval case, after you've gotten the data and have an actual lower and upper bound, the mean is either within those bounds or it isn't, i.e. the chance of the mean being within those particular bounds is either 1 or 0. (Having skimmed the question you refer to it seems this is covered in much more detail there.)
How to interpret a confidence interval given to you if you subscribe to a Bayesian view of probability.
There are a couple of ways of looking at this
Technically, the confidence interval hasn't been produced using a prior and Bayes theorem, so if you had a prior belief about the parameter concerned, there would be no way you could interpret the confidence interval in the Bayesian framework.
Another widely used and respected interpretation of confidence intervals is that they provide a "plausible range" of values for the parameter (see, e.g., here). This de-emphasises the "repeated experiments" interpretation.
Moreover, under certain circumstances, notably when the prior is uninformative (doesn't tell you anything, e.g. flat), confidence intervals can produce exactly the same interval as a credible interval. In these circumstances, as a Bayesianist you could argue that had you taken the Bayesian route you would have gotten exactly the same results and you could interpret the confidence interval in the same way as a credible interval. | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true para
from a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability?
Two answers to this, the first being less helpful than the second
Ther |
18,479 | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability? | I'll give you an extreme example where they are different.
Suppose I create my 95% confidence interval for a parameter $\theta $ as follows. Start by sampling the data. Then generate a random number between $0 $ and $1 $. Call this number $ u $. If $ u $ is less than $0.95 $ then return the interval $(-\infty,\infty) $. Otherwise return the "null" interval.
Now over continued repetitions, 95% of the CIs will be "all numbers" and hence contain the true value. The other 5% contain no values, hence have zero coverage. Overall, this is a useless, but technically correct 95% CI.
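This pathological procedure is easy to simulate (a sketch; the "null" interval is represented by None):

```python
import numpy as np

rng = np.random.default_rng(42)
theta = 3.7            # the fixed true parameter (its value never matters here)
reps = 100_000

covered = 0
for _ in range(reps):
    u = rng.uniform()  # the data themselves are ignored by this procedure
    interval = (-np.inf, np.inf) if u < 0.95 else None  # None = "null" interval
    if interval is not None and interval[0] < theta < interval[1]:
        covered += 1

print(covered / reps)  # close to 0.95, yet every interval is useless
```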
The Bayesian credible interval will be either 100% or 0%. Not 95%. | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true para | I'll give you an extreme example where they are different.
Suppose I create my 95% confidence interval for a parameter $\theta $ as follows. Start by sampling the data. Then generate a random number | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability?
I'll give you an extreme example where they are different.
Suppose I create my 95% confidence interval for a parameter $\theta $ as follows. Start by sampling the data. Then generate a random number between $0 $ and $1 $. Call this number $ u $. If $ u $ is less than $0.95 $ then return the interval $(-\infty,\infty) $. Otherwise return the "null" interval.
Now over continued repetitions, 95% of the CIs will be "all numbers" and hence contain the true value. The other 5% contain no values, hence have zero coverage. Overall, this is a useless, but technically correct 95% CI.
The Bayesian credible interval will be either 100% or 0%. Not 95%. | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true para
I'll give you an extreme example where they are different.
Suppose I create my 95% confidence interval for a parameter $\theta $ as follows. Start by sampling the data. Then generate a random number |
18,480 | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability? | "from a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability? "
In Bayesian statistics the parameter is not an unknown fixed value; it is described by a distribution. There is no interval containing the "true value"; from a Bayesian point of view that notion does not even make sense. The parameter is a random variable, and if you know its distribution you can state exactly the probability that its value lies between x_inf and x_max. It is simply a different mindset about parameters: Bayesians usually take the median or mean of the parameter's distribution as an "estimate". There is no confidence interval in Bayesian statistics; the analogous object is called a credible interval.
Now from a frequentist point of view, the parameter is a fixed value, not a random variable. Can you really obtain a probability interval (a 95% one) for it? Remember that it is a fixed value, not a random variable with a known distribution. That is why you pasted the text: "A confidence interval does not predict that the true value of the parameter has a particular probability of being in the confidence interval given the data actually obtained."
The idea of repeating the experiment over and over is not Bayesian reasoning; it is a frequentist one. Imagine a real-life experiment that you can only do once in your lifetime: can you, or should you, build that confidence interval (from the classical point of view)?
But... in real life the results can get pretty close (Bayesian vs frequentist), and maybe that is why it can be confusing. | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true para | "from a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability? "
In Bayesian Statistics the parameter is not a unknown value, it is a | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability?
"from a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability? "
In Bayesian statistics the parameter is not an unknown fixed value; it is described by a distribution. There is no interval containing the "true value"; from a Bayesian point of view that notion does not even make sense. The parameter is a random variable, and if you know its distribution you can state exactly the probability that its value lies between x_inf and x_max. It is simply a different mindset about parameters: Bayesians usually take the median or mean of the parameter's distribution as an "estimate". There is no confidence interval in Bayesian statistics; the analogous object is called a credible interval.
Now from a frequentist point of view, the parameter is a fixed value, not a random variable. Can you really obtain a probability interval (a 95% one) for it? Remember that it is a fixed value, not a random variable with a known distribution. That is why you pasted the text: "A confidence interval does not predict that the true value of the parameter has a particular probability of being in the confidence interval given the data actually obtained."
The idea of repeating the experiment over and over is not Bayesian reasoning; it is a frequentist one. Imagine a real-life experiment that you can only do once in your lifetime: can you, or should you, build that confidence interval (from the classical point of view)?
But... in real life the results can get pretty close (Bayesian vs frequentist), and maybe that is why it can be confusing. | From a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true para
"from a Bayesian probability perspective, why doesn't a 95% confidence interval contain the true parameter with 95% probability? "
In Bayesian Statistics the parameter is not a unknown value, it is a |
18,481 | Calculating likelihood from RMSE | The root mean squared error and the likelihood are actually closely related. Say you have a dataset of $\lbrace x_i, z_i \rbrace$ pairs and you want to model their relationship using the model $f$. You decide to minimize the quadratic error
$$\sum_i \left(f(x_i) - z_i\right)^2$$
Isn't this choice totally arbitrary? Sure, you want to penalize estimates that are completely wrong more than those that are about right. But there is a very good reason to use the squared error.
Remember the Gaussian density: $\frac{1}{Z}\exp \frac{-(x - \mu)^2}{2\sigma^2}$ where $Z$ is the normalization constant that we do not care about for now. Let's assume that your target data $z$ is distributed according to a Gaussian. So we can write down the likelihood of the data.
$$\mathcal{L} = \prod_i \frac{1}{Z}\exp \frac{-(f(x_i) - z_i)^2}{2\sigma^2}$$
Now if you take the logarithm of this...
$$\log \mathcal{L} = \sum_i \frac{-(f(x_i) - z_i)^2}{2\sigma^2} - \log Z$$
... it turns out that it is very closely related to the rms: the only differences are some constant terms, a square root and a multiplication.
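The equivalence can be checked numerically (a sketch with a made-up one-parameter linear model and fixed $\sigma$): scanning candidate slopes, the RMSE minimizer and the Gaussian log-likelihood maximizer pick out the same parameter.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
z = 2.0 * x + rng.normal(0.0, 0.3, x.size)  # made-up noisy targets

sigma = 0.3

def rmse(slope):
    return np.sqrt(np.mean((slope * x - z) ** 2))

def log_lik(slope):
    # Gaussian log-likelihood, dropping the slope-independent -log Z terms.
    return np.sum(-((slope * x - z) ** 2) / (2.0 * sigma ** 2))

slopes = np.linspace(0.0, 4.0, 401)
best_by_rmse = slopes[np.argmin([rmse(s) for s in slopes])]
best_by_ll = slopes[np.argmax([log_lik(s) for s in slopes])]
print(best_by_rmse, best_by_ll)  # the same slope wins both criteria
```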
Long story short: Minimizing the root mean squared error is equivalent to maximizing the log likelihood of the data. | Calculating likelihood from RMSE | The root mean squared error and the likelihood are actually closely related. Say you have a dataset of $\lbrace x_i, z_i \rbrace$ pairs and you want to model their relationship using the model $f$. Yo | Calculating likelihood from RMSE
The root mean squared error and the likelihood are actually closely related. Say you have a dataset of $\lbrace x_i, z_i \rbrace$ pairs and you want to model their relationship using the model $f$. You decide to minimize the quadratic error
$$\sum_i \left(f(x_i) - z_i\right)^2$$
Isn't this choice totally arbitrary? Sure, you want to penalize estimates that are completely wrong more than those that are about right. But there is a very good reason to use the squared error.
Remember the Gaussian density: $\frac{1}{Z}\exp \frac{-(x - \mu)^2}{2\sigma^2}$ where $Z$ is the normalization constant that we do not care about for now. Let's assume that your target data $z$ is distributed according to a Gaussian. So we can write down the likelihood of the data.
$$\mathcal{L} = \prod_i \frac{1}{Z}\exp \frac{-(f(x_i) - z_i)^2}{2\sigma^2}$$
Now if you take the logarithm of this...
$$\log \mathcal{L} = \sum_i \frac{-(f(x_i) - z_i)^2}{2\sigma^2} - \log Z$$
... it turns out that it is very closely related to the rms: the only differences are some constant terms, a square root and a multiplication.
Long story short: Minimizing the root mean squared error is equivalent to maximizing the log likelihood of the data. | Calculating likelihood from RMSE
The root mean squared error and the likelihood are actually closely related. Say you have a dataset of $\lbrace x_i, z_i \rbrace$ pairs and you want to model their relationship using the model $f$. Yo |
18,482 | Closed form formula for distribution function including skewness and kurtosis? | There are many such formulas. The first successful attempt at solving precisely this problem was made by Karl Pearson in 1895, eventually leading to the system of Pearson distributions. This family can be parameterized by the mean, variance, skewness, and kurtosis. It includes, as familiar special cases, Normal, Student-t, Chi-square, Inverse Gamma, and F distributions. Kendall & Stuart Vol 1 give details and examples.
18,483 | Closed form formula for distribution function including skewness and kurtosis? | This sounds like a 'moment-matching' approach to fitting a distribution to data. It is generally regarded as not a great idea (the title of John Cook's blog post is 'a statistical dead end').
18,484 | Closed form formula for distribution function including skewness and kurtosis? | D’Agostino’s K2 test will tell you whether a sample distribution came from a normal distribution based on the sample's skewness and kurtosis.
If you want to do a test assuming a non-normal distribution (perhaps with high skewness or kurtosis), you'll need to figure out what the distribution is. You can look at the skew normal distribution and the generalized normal distribution. If you do this, you should consider other distributions too.
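D'Agostino's K2 statistic combines standardized versions of the sample skewness and excess kurtosis. A minimal sketch of those two ingredients (in Python; the helper name and the data are invented for illustration):

```python
def sample_skew_kurt(xs):
    # Biased (population-style) central moment estimators, the raw
    # ingredients of skewness/kurtosis-based normality tests.
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    m4 = sum((x - mean) ** 4 for x in xs) / n
    skew = m3 / m2 ** 1.5
    excess_kurtosis = m4 / m2 ** 2 - 3.0
    return skew, excess_kurtosis

# A symmetric sample has zero skewness; this flat one is platykurtic.
skew, kurt = sample_skew_kurt([-2.0, -1.0, 0.0, 1.0, 2.0])
print(skew, kurt)  # 0.0 and -1.3
```

The K2 test turns each of these into an approximate z-score and sums the squares, giving a statistic that is roughly chi-squared with 2 degrees of freedom under normality.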
18,485 | Interpreting range bars in R's plot.stl? | Here is an example to discuss specifics against:
> plot(stl(nottem, "per"))
So on the upper panel, we might consider the bar as 1 unit of variation. The bar on the seasonal panel is only slightly larger than that on the data panel, indicating that the seasonal signal is large relative to the variation in the data. In other words, if we shrunk the seasonal panel such that the box became the same size as that in the data panel, the range of variation on the shrunk seasonal panel would be similar to but slightly smaller than that on the data panel.
Now consider the trend panel; the grey box is now much larger than either of the ones on the data or seasonal panel, indicating the variation attributed to the trend is much smaller than the seasonal component and consequently only a small part of the variation in the data series. The variation attributed to the trend is considerably smaller than the stochastic component (the remainders). As such, we can deduce that these data do not exhibit a trend.
Now look at another example:
> plot(stl(co2, "per"))
which gives
If we look at the relative sizes of the bars on this plot, we note that the trend dominates the data series and consequently the grey bars are of similar size. Of next greatest importance is variation at the seasonal scale, although variation at this scale is a much smaller component of the variation exhibited in the original data. The residuals (remainder) represent only small stochastic fluctuations as the grey bar is very large relative to the other panels.
So the general idea is that if you scaled all the panels such that the grey bars were all the same size, you would be able to determine the relative magnitude of the variations in each of the components and how much of the variation in the original data they contained. But because the plot draws each component on its own scale, we need the bars to give us a relative scale for comparison.
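The same comparison can be mimicked on a toy decomposition (a Python sketch with invented components, not an actual STL fit): the range of each component relative to the range of the data is exactly what the grey bars encode.

```python
import math
import random

random.seed(1)
n = 120  # ten "years" of monthly data
trend = [0.01 * t for t in range(n)]                                 # weak trend
seasonal = [3.0 * math.sin(2 * math.pi * t / 12) for t in range(n)]  # strong cycle
remainder = [random.gauss(0, 0.5) for _ in range(n)]
data = [a + b + c for a, b, c in zip(trend, seasonal, remainder)]

def spread(xs):
    return max(xs) - min(xs)

for name, comp in [("data", data), ("seasonal", seasonal),
                   ("trend", trend), ("remainder", remainder)]:
    print(f"{name:9s} range = {spread(comp):.2f}")
# The seasonal range is close to the data range while the trend's is small:
# the nottem situation described above.
```

A component whose range is a large fraction of the data's range (small grey bar) explains most of the variation; one with a tiny range (large grey bar) explains little.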
Does this help any?
18,486 | What if there is no true data-generating process? | Have you heard the "all models are wrong, but some are useful" quote? It's one of the most famous quotes in statistics.
Let's use human language as an example. What you say is a result of many parallel and concurring processes. It is influenced by the rules governing the language, your fluency in the language, educational background, the books you've read in your lifetime, cultural factors, context, whom you're talking to, psychological and physiological factors influencing you at the moment of speaking, and many, many more things, and you may be quoting or misquoting someone who was influenced by them in the past, etc. There is no single function, process, or distribution that "generated" the words that came out of your mouth.
Playing Advocatus Diaboli, now think of forecasting the weather. It is hard, because weather is influenced by many interacting factors. Weather is a chaotic system. But maybe the system as a whole can be thought of as a process that generates the weather?
It's a philosophical discussion. It's also an unnecessary one, at least from a practical point of view. We don't really need to believe that there's a distribution or process that generates our data. It's a mathematical abstraction. We wouldn't be able to talk about statistical properties of estimators such as bias and variance (to give only one example) without introducing some abstract, mathematical objects for the things that are modeled. We are using mathematical functions to approximate something; this something needs also to be considered as a function, so it can be discussed in mathematical terms. We are not claiming that there exists a process that "generates" the data for us, we are just using an abstract concept to talk about it.
So yes, all models are misspecified, wrong. They are only approximations. The "things" they approximate are just abstract concepts. If you want to really go all the way down the rabbit hole, there is no such thing as sound, colors, wind, or trees, or us. We are just particles surrounded by other particles, and we assign some meanings to groups of particles that at a particular moment stay close to each other, but do those things exist? Maybe we should be building particle-level models of reality? A related xkcd below.
18,487 | What if there is no true data-generating process? | Looking at it the other way, if there were no true data generating process, how did the data get generated?
The inability of standard estimating techniques to accurately approximate the true data-generating process doesn't mean that the data generating process doesn't exist, it just means that we don't have enough data to determine the parameters of the model (or more generally the correct form of the model).
However, when we make a model, our goal is not to exactly capture the true data generating process, only to make a simplified representation or abstraction of the important features of the true data generating process (TDGP) that we can use to understand the TDGP or to make predictions/forecast of how it will behave in some situation we have not directly observed. Our brains are very limited, we can't understand the detail of the TDGP, so we need abstractions and simplified models to maximise what we are able to understand.
Rather than say there is no TDGP, I would say there is no such thing as "randomness" (except perhaps at a quantum level, but even that might not be random either, although the Bell experiment suggests it probably is). We use the concept of "random" to explain the results of deterministic systems that we can't predict because of a lack of information. So the purpose of a statistical model is to express our limited state of knowledge regarding the deterministic system. For example, flipping a coin isn't random; whether it comes down heads or tails is just physics, depending on the properties of the coin and the forces applied to it. It only seems random because we don't have full knowledge of those properties or forces.
At the end of the day, the more data we have, in principle the more information we can extract from it (with diminishing returns), and the better our state of knowledge about the TDGP.
The reason averaging helps is that the error of the model is composed of bias and variance, cf. @Tim's answer (+1). If we don't have much data, the variance component will be high, but that variance will not be coherent for models trained on different samples, and so will partially cancel when model predictions are averaged. This is not telling you anything about the TDGP; it is telling you about the estimation of model parameters (and that you should get more data if you can).
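That variance-cancellation argument can be illustrated with a toy simulation (a Python sketch; the model and numbers are invented for illustration): averaging slope estimates fitted on independent small samples shrinks the variance of the combined estimate without changing what is being estimated.

```python
import random

random.seed(0)

def draw_sample(n=5):
    # True process: y = 2x + Gaussian noise.
    xs = [random.uniform(0.0, 1.0) for _ in range(n)]
    return [(x, 2.0 * x + random.gauss(0.0, 1.0)) for x in xs]

def fit_slope(sample):
    # Least-squares slope of a line through the origin.
    return (sum(x * y for x, y in sample)
            / sum(x * x for x, _ in sample))

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / len(v)

single = [fit_slope(draw_sample()) for _ in range(2000)]
averaged = [sum(fit_slope(draw_sample()) for _ in range(10)) / 10
            for _ in range(200)]

print(variance(single), variance(averaged))
# The 10-model average has far smaller variance than a single fit.
```

Each individual fit is still unbiased for the same quantity; only the variance component of the error shrinks, which is exactly the ensemble effect described above.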
18,488 | Issue with complete separation in logistic regression (in R) | TL;DR: The warning is not occurring because of complete separation.
library("tidyverse")
library("broom")
# semicolon delimited but period for decimal
ratios <- read_delim("data/W0krtTYM.txt", delim=";")
# filter out the ones with missing values to make it easier to see what's going on
ratios.complete <- filter(ratios, !is.na(ROS), !is.na(ROI), !is.na(debt_ratio))
glm0<-glm(Default~ROS+ROI+debt_ratio,data=ratios.complete,family=binomial)
#> Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred
summary(glm0)
#>
#> Call:
#> glm(formula = Default ~ ROS + ROI + debt_ratio, family = binomial,
#> data = ratios.complete)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -2.8773 -0.3133 -0.2868 -0.2355 3.6160
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept) -3.759154 0.306226 -12.276 < 2e-16 ***
#> ROS -0.919294 0.245712 -3.741 0.000183 ***
#> ROI -0.044447 0.008981 -4.949 7.45e-07 ***
#> debt_ratio 0.868707 0.291368 2.981 0.002869 **
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for binomial family taken to be 1)
#>
#> Null deviance: 604.89 on 998 degrees of freedom
#> Residual deviance: 372.43 on 995 degrees of freedom
#> AIC: 380.43
#>
#> Number of Fisher Scoring iterations: 8
When does that warning occur? Looking at the source code for glm.fit() we find
eps <- 10 * .Machine$double.eps
if (family$family == "binomial") {
if (any(mu > 1 - eps) || any(mu < eps))
warning("glm.fit: fitted probabilities numerically 0 or 1 occurred",
call. = FALSE)
}
The warning will arise whenever a predicted probability is effectively indistinguishable from 1.
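The same threshold can be reproduced outside R (a Python sketch of the check, not the glm.fit code itself): a fitted probability counts as "numerically 1" once it is within about ten machine epsilons of 1.

```python
import math
import sys

eps = 10 * sys.float_info.epsilon   # analogue of 10 * .Machine$double.eps
p = 1.0 / (1.0 + math.exp(-40.0))   # logistic probability at linear predictor 40
print(eps, p, p > 1.0 - eps)
# exp(-40) is far below machine epsilon, so p rounds to exactly 1.0
# and would trigger the glm.fit warning.
```

So any observation whose linear predictor is a few tens or more produces a probability indistinguishable from 1 in double precision, regardless of whether the classes are actually separated.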
The problem is on the top end:
eps <- 10 * .Machine$double.eps   # same threshold used inside glm.fit
glm0.resids <- augment(glm0) %>%
  mutate(p = 1 / (1 + exp(-.fitted)),
         warning = p > 1 - eps)
arrange(glm0.resids, desc(.fitted)) %>%
select(2:5, p, warning) %>%
slice(1:10)
#> # A tibble: 10 x 6
#> ROS ROI debt_ratio .fitted p warning
#> <dbl> <dbl> <dbl> <dbl> <dbl> <lgl>
#> 1 - 25.0 -10071 452 860 1.00 T
#> 2 -292 - 17.9 0.0896 266 1.00 T
#> 3 - 96.0 - 176 0.0219 92.3 1.00 T
#> 4 - 25.4 - 548 6.43 49.5 1.00 T
#> 5 - 1.80 - 238 21.2 26.9 1.000 F
#> 6 - 5.65 - 344 11.3 26.6 1.000 F
#> 7 - 0.597 - 345 4.43 16.0 1.000 F
#> 8 - 2.62 - 359 0.444 15.0 1.000 F
#> 9 - 0.470 - 193 9.87 13.8 1.000 F
#> 10 - 2.46 - 176 3.64 9.50 1.000 F
So there are four observations that are causing the issue. They all have extreme values of one or more covariates.
But there are lots of other observations that are similarly close to 1.
There are some observations with high leverage -- what do they look like?
arrange(glm0.resids, desc(.hat)) %>%
select(2:4, .hat, p, warning) %>%
slice(1:10)
#> # A tibble: 10 x 6
#> ROS ROI debt_ratio .hat p warning
#> <dbl> <dbl> <dbl> <dbl> <dbl> <lgl>
#> 1 0.995 - 2.46 4.96 0.358 0.437 F
#> 2 -3.01 - 0.633 1.36 0.138 0.555 F
#> 3 -3.08 -14.6 0.0686 0.136 0.444 F
#> 4 -2.64 - 0.113 1.90 0.126 0.579 F
#> 5 -2.95 -13.9 0.773 0.112 0.561 F
#> 6 -0.0132 -14.9 3.12 0.0936 0.407 F
#> 7 -2.60 -10.9 0.856 0.0881 0.464 F
#> 8 -3.41 -26.4 1.12 0.0846 0.821 F
#> 9 -1.63 - 1.02 2.14 0.0746 0.413 F
#> 10 -0.146 -17.6 8.02 0.0644 0.984 F
None of those are problematic. Eliminate the four observations that trigger the warning; does the answer change?
ratios2 <- filter(ratios.complete, !glm0.resids$warning)
glm1<-glm(Default~ROS+ROI+debt_ratio,data=ratios2,family=binomial)
summary(glm1)
#>
#> Call:
#> glm(formula = Default ~ ROS + ROI + debt_ratio, family = binomial,
#> data = ratios2)
#>
#> Deviance Residuals:
#> Min 1Q Median 3Q Max
#> -2.8773 -0.3133 -0.2872 -0.2363 3.6160
#>
#> Coefficients:
#> Estimate Std. Error z value Pr(>|z|)
#> (Intercept) -3.75915 0.30621 -12.277 < 2e-16 ***
#> ROS -0.91929 0.24571 -3.741 0.000183 ***
#> ROI -0.04445 0.00898 -4.949 7.45e-07 ***
#> debt_ratio 0.86871 0.29135 2.982 0.002867 **
#> ---
#> Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
#>
#> (Dispersion parameter for binomial family taken to be 1)
#>
#> Null deviance: 585.47 on 994 degrees of freedom
#> Residual deviance: 372.43 on 991 degrees of freedom
#> AIC: 380.43
#>
#> Number of Fisher Scoring iterations: 6
tidy(glm1)[,2] - tidy(glm0)[,2]
#> [1] 2.058958e-08 4.158585e-09 -1.119948e-11 -2.013056e-08
None of the coefficients changed by more than about $2 \times 10^{-8}$! So essentially unchanged results. I'll go out on a limb here, but I think that's a "false positive" warning, nothing to worry about.
This warning arises with complete separation, but in that case I would expect to see the coefficient for one or more covariates get very large, with a standard error
that is even larger. That's not occurring here, and from your plots you can see that the defaults occur across overlapping ranges of all covariates.
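The distinction can be seen directly in the likelihood (a Python sketch with made-up one-covariate data): under complete separation the log-likelihood keeps improving as the coefficient grows, so the MLE runs off to infinity; with overlapping classes it peaks at a finite coefficient.

```python
import math

def log_lik(beta, data):
    # Log-likelihood of a one-covariate logistic model with no intercept.
    ll = 0.0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-beta * x))
        ll += math.log(p) if y == 1 else math.log(1.0 - p)
    return ll

separated = [(-2, 0), (-1, 0), (1, 1), (2, 1)]    # x > 0 classifies perfectly
overlapping = [(-2, 0), (-1, 1), (1, 0), (2, 1)]  # no threshold is perfect

for beta in (1.0, 5.0, 10.0):
    print(beta, log_lik(beta, separated), log_lik(beta, overlapping))
# Separated: the log-likelihood rises monotonically toward 0 as beta grows,
# so no finite MLE exists. Overlapping: it falls again, so a finite
# maximum (and finite coefficient estimate) exists.
```

That runaway coefficient (with an even larger standard error) is the signature of true separation, and it is exactly what is absent from the fit above.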
So the warning occurs because a few observations have very extreme values of the covariates. That could be a problem if those observations were also
highly influential. But they're not.
In the comments you asked "Why does standardization blow up the standard errors?". Standardizing your covariates changes the scale. The coefficients and standard errors
refer to a one unit change in the covariate, always. So if the variance of your covariate is larger than 1, then standardizing is going to shrink the scale.
A one unit change on the standardized scale is the same as a much larger change on the unstandardized scale. So the coefficients and standard errors will get larger. Look at the
z values -- they should not change even if you standardize. The z value of the intercept changes if you also center the covariates, because now it is estimating
a different point (at the mean of the covariates, instead of at 0)
ratios.complete2 <- mutate(ratios.complete,
scROS = (ROS - mean(ROS))/sd(ROS),
scROI = (ROI - mean(ROI))/sd(ROI),
scdebt_ratio = (debt_ratio - mean(debt_ratio))/sd(debt_ratio))
glm2<-glm(Default~scROS+scROI+scdebt_ratio,data=ratios.complete2,family=binomial)
#> Warning: glm.fit: fitted probabilities numerically 0 or 1 occurred
# compare z values
tidy(glm2)[,4] - tidy(glm0)[,4]
#> [1] 4.203563e+00 8.881784e-16 1.776357e-15 -6.217249e-15
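The scale-invariance of the z value can also be seen in a toy ordinary least squares fit (a Python sketch, not part of the original reprex; the data are invented): rescaling the covariate rescales both the slope and its standard error, leaving z unchanged.

```python
import math

def slope_and_se(xs, ys):
    # Least-squares slope through the origin and its standard error.
    sxx = sum(x * x for x in xs)
    b = sum(x * y for x, y in zip(xs, ys)) / sxx
    resid = [y - b * x for x, y in zip(xs, ys)]
    s2 = sum(r * r for r in resid) / (len(xs) - 1)
    return b, math.sqrt(s2 / sxx)

xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [1.2, 1.9, 3.3, 3.8, 5.3]

b1, se1 = slope_and_se(xs, ys)
b2, se2 = slope_and_se([x / 10.0 for x in xs], ys)  # rescaled covariate

print(b1 / se1, b2 / se2)  # identical z values; b2 = 10 * b1, se2 = 10 * se1
```

Dividing the covariate by 10 multiplies the slope and its standard error by 10, so their ratio, the z value, is untouched.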
Created on 2018-03-25 by the reprex package (v0.2.0).
18,489 | Issue with complete separation in logistic regression (in R) | I got the same issue while working with logistic regression in R. The issue seems to be that the outcome variable depends completely on some of the independent variables. I searched and found people are using GLM nets. I resolved my issue using the quasibinomial family in the glm function, like:
glm(Default~ROS+ROI+debt_ratio,data=ratios.complete,family=quasibinomial)
Get more about the quasibinomial model here.
I got the same issue while I am working with logistic regression in R, The issue seems that the predictor variable completely depends on some of the independent variables. I searched and found people are using GLM nets. I resolved my issue using the Quasi Binomial model in GLM function like:
glm(Default~ROS+ROI+debt_ratio,data=ratios.complete,family=quasibinomial)
get more about quasibinomial model here. | Issue with complete separation in logistic regression (in R)
I got the same issue while I am working with logistic regression in R, The issue seems that the predictor variable completely depends on some of the independent variables. I searched and found people |
18,490 | Can a 3D joint distribution be reconstructed by 2D marginals? | No. Perhaps the simplest counterexample concerns the distribution of three independent $\text{Bernoulli}(1/2)$ variables $X_i$, for which all eight possible outcomes from $(0,0,0)$ through $(1,1,1)$ are equally likely. This makes all three bivariate marginal distributions uniform on $\{(0,0),(0,1),(1,0),(1,1)\}$.
Consider the random variables $(Y_1,Y_2,Y_3)$ which are uniformly distributed on the set $\{(1,0,0),(0,1,0), (0,0,1),(1,1,1)\}$. These have the same marginals as $(X_1,X_2,X_3)$.
The cover of Douglas Hofstadter's Gödel, Escher, Bach hints at the possibilities.
The three orthogonal projections (shadows) of each of these solids onto the coordinate planes are the same, but the solids obviously differ. Although shadows aren't quite the same thing as marginal distributions, they function in rather a similar way to restrict, but not completely determine, the 3D object that casts them.
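The counterexample above is small enough to check exhaustively. Here is a short Python sketch (added for illustration; the two distributions are exactly those described) confirming that all three bivariate marginals coincide while the joint distributions differ:

```python
from collections import Counter
from itertools import product

# X is uniform on all eight binary triples; Y is uniform on the four triples
# listed above. Their joints differ, yet every bivariate marginal is the same.
X_dist = {t: 1 / 8 for t in product((0, 1), repeat=3)}
Y_dist = {t: 1 / 4 for t in [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 1)]}

def marginal(dist, i, j):
    # collapse the joint onto coordinates (i, j)
    m = Counter()
    for triple, p in dist.items():
        m[(triple[i], triple[j])] += p
    return dict(m)

pairs = [(0, 1), (0, 2), (1, 2)]
same_marginals = all(marginal(X_dist, i, j) == marginal(Y_dist, i, j)
                     for i, j in pairs)
print(same_marginals, X_dist == Y_dist)  # True False
```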
18,491 | Can a 3D joint distribution be reconstructed by 2D marginals? | In the same spirit as whuber's answer,
Consider jointly continuous random variables $U, V, W$ with joint density function
\begin{align}
f_{U,V,W}(u,v,w) = \begin{cases} 2\phi(u)\phi(v)\phi(w)
& ~~~~\text{if}~ u \geq 0, v\geq 0, w \geq 0,\\
& \text{or if}~ u < 0, v < 0, w \geq 0,\\
& \text{or if}~ u < 0, v\geq 0, w < 0,\\
& \text{or if}~ u \geq 0, v< 0, w < 0,\\
& \\
0 & \text{otherwise}
\end{cases}\tag{1}
\end{align}
where $\phi(\cdot)$ denotes the standard normal density function.
It is clear that $U, V$, and $W$ are dependent
random variables. It is also clear that they are not
jointly normal random variables.
However, all three pairs $(U,V), (U,W), (V,W)$
are pairwise independent random variables: in fact,
independent standard normal random variables (and thus
pairwise jointly normal random variables).
In short,
$U,V,W$ are an example of pairwise independent but not
mutually independent standard normal random variables.
See this answer of mine
for more details.
In contrast, if $X,Y,Z$ are mutually independent standard normal random variables, then they are also pairwise independent random variables but their joint density is
$$f_{X,Y,Z}(u,v,w) = \phi(u)\phi(v)\phi(w), ~~u,v,w \in \mathbb R \tag{2}$$ which is not the same as the joint density in $(1)$. So, NO, we cannot deduce the trivariate joint pdf from the bivariate pdfs even in the case when the marginal univariate distributions are standard normal and the random variables are pairwise independent.
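A Monte Carlo sketch of the construction in $(1)$ (a numerical illustration, not part of the original answer): sampling is easy because the density puts mass $2\phi(u)\phi(v)\phi(w)$ on four of the eight octants, so one can draw the magnitudes as $|N(0,1)|$ variates and pick a sign pattern uniformly from the four allowed ones.

```python
import random
from collections import Counter

# Draw from density (1): magnitudes are |N(0,1)| and the sign pattern is one
# of the four allowed octant patterns, chosen uniformly.
random.seed(42)
patterns = [(1, 1, 1), (-1, -1, 1), (-1, 1, -1), (1, -1, -1)]

def draw():
    su, sv, sw = random.choice(patterns)
    return (su * abs(random.gauss(0, 1)),
            sv * abs(random.gauss(0, 1)),
            sw * abs(random.gauss(0, 1)))

samples = [draw() for _ in range(20000)]

# three-way dependence: the product of the three signs is always +1
assert all(u * v * w > 0 for u, v, w in samples)

# pairwise behaviour: the sign pair (U, V) hits all four quadrants about
# equally often, consistent with (U, V) being independent standard normals
pair_signs = Counter((u > 0, v > 0) for u, v, _ in samples)
print(pair_signs)
```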
18,492 | Can a 3D joint distribution be reconstructed by 2D marginals? | You're basically asking if CAT reconstruction is possible using only images along the 3 main axes.
It is not... otherwise that's what they would do. :-) See the Radon transform for more literature.
18,493 | What are some important uses of random number generation in computational statistics? | There are many, many examples. Way too many to list, and probably too many for anyone to know completely (besides possibly @whuber, who should never be underestimated).
As you mention, in controlled experiments we avoid sampling bias by randomly partitioning subjects into treatment and control groups.
In bootstrapping we approximate repeated sampling from a population by randomly sampling with replacement from a fixed sample. This lets us estimate the variance of our estimates, among other things.
In cross validation we estimate the out of sample error of an estimate by randomly partitioning our data into slices and assembling random training and testing sets.
In permutation testing we use random permutations to sample under the null hypothesis, allowing to perform nonparametric hypothesis tests in a wide variety of situations.
In bagging we control the variance of an estimate by repeatedly performing estimation on bootstrap samples of training data, and then averaging results.
In random forests we further control the variance of an estimate by also randomly sampling from the available predictors at every decision point.
In simulation we ask a fit model to randomly generate new data sets which we can compare to training or testing data, helping validate the fit and assumptions in a model.
In Markov chain Monte Carlo we sample from a distribution by exploring the space of possible outcomes using a Markov chain (thanks to @Ben Bolker for this example).
Those are just the common, everyday applications that come to mind immediately. If I dug deep, I could probably double the length of that list. Randomness is both an important object of study, and an important tool to wield.
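As a concrete illustration of the bootstrapping bullet above, here is a minimal pure-Python sketch with made-up data (real analyses would typically use a statistics library): the bootstrap estimate of the standard error of a sample mean closely matches the classical $s/\sqrt{n}$ formula.

```python
import random
import statistics

# Bootstrap sketch: estimate the standard error of a sample mean by
# resampling with replacement, then compare to the classical formula.
random.seed(0)
sample = [random.gauss(10, 2) for _ in range(100)]  # made-up data

boot_means = []
for _ in range(2000):
    resample = [random.choice(sample) for _ in sample]
    boot_means.append(statistics.mean(resample))

se_boot = statistics.stdev(boot_means)                      # bootstrap estimate
se_classic = statistics.stdev(sample) / len(sample) ** 0.5  # s / sqrt(n)
print(se_boot, se_classic)  # the two should be close
```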
18,494 | What are some important uses of random number generation in computational statistics? | This is all true but doesn't address the main problem: a PRNG with any
sort of resultant structure or predictability in the sequence will
cause the simulations to fail. Carl Witthoft Jan 31 at 15:51
If this is your concern then maybe the title of the question should be changed to "Impact of RNG choice on Monte Carlo results" or something like that. This case has already been considered on SE Cross Validated; here are some directions:
If you are considering poorly designed RNGs like the infamous RANDU they will clearly negatively impact the Monte Carlo approximation. To spot deficiencies in RNGs, there exist banks of benchmarks like Marsaglia's Diehard tests. (For instance Park & Miller (1988) use of the Lehmer congruential generator with the factor 16807 has been found lacking, to be replaced with 47271 or 69621. Of course this has been superseded by massive period generators like the Mersenne Twister PRNG.)
A SE question on maths provides a link on the impact (or lack thereof) on estimation and precision, if not a very helpful answer.
Jeff Rosenthal (U Toronto) has a paper where he studies the impact of an RNG on the convergence of (Monte Carlo) Markov chains, but I cannot find it. I recently ran a small experiment on my blog with no visible impact of the RNG type.
As an aside, a lottery scheme in Ontario used poorly designed random generation, which was spotted by a statistician, Mohan Srivastava of Toronto, Canada, who notified the Ontario Lottery and Gaming Corporation of the issue, rather than making a hefty profit out of this loophole.
Here is an illustration of a case where a classic network simulator is impacted by a poor default choice (linked to Park and Miller above).
There are specific issues with the structure of RNGs used in parallel computing. Using several seeds is usually not good enough, especially for linear congruential generators. Many approaches can be found in the computer literature, including the scalable parallel random number generation (SPRNG) packages of Michael Mascagni (including an R version) and Matsumoto’s dynamic creator, a C program that provides starting values for independent streams when using the Mersenne twister. This has also been addressed on SE stack overflow.
Last year, I saw a talk by Paula Whitlock on the impact of the GNU Scientific Library on the convergence of high-dimension random walks, but cannot find the reference.
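The defect of the infamous RANDU mentioned in the first point above is easy to exhibit directly. RANDU is $x_{n+1} = 65539\, x_n \bmod 2^{31}$, and since $65539 = 2^{16}+3$, squaring the multiplier modulo $2^{31}$ gives $65539^2 \equiv 6\cdot 65539 - 9$, so every consecutive triple satisfies a fixed linear relation (the reason its triples fall on only 15 planes in the unit cube). A short Python check:

```python
# RANDU: x_{n+1} = 65539 * x_n mod 2^31. Because 65539 = 2^16 + 3, every
# consecutive triple satisfies x_{n+2} = 6*x_{n+1} - 9*x_n (mod 2^31).
M = 2 ** 31

def randu(seed, n):
    xs, x = [], seed
    for _ in range(n):
        x = (65539 * x) % M
        xs.append(x)
    return xs

xs = randu(1, 10000)
defect = all((6 * xs[i + 1] - 9 * xs[i]) % M == xs[i + 2]
             for i in range(len(xs) - 2))
print(defect)  # True: the relation holds for every triple
```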
To end up on a light note, there also is some literature on the distinction between software and hardware RNGs, with claims that psychics can impact the latter!
18,495 | Is the sum of a discrete and a continuous random variable continuous or mixed? | Suppose $X$ assumes values $k \in K$ with discrete distribution $(p_k)_{k \in K}$, where $K$ is a countable set, and $Y$ assumes values in $\mathbb R$ with density $f_Y$ and CDF $F_Y$.
Let $Z = X + Y$ and suppose $X$ and $Y$ are independent. We have
$$ \mathbb P( Z \leq z) = \mathbb P(X + Y \leq z) = \sum_{k \in K} \mathbb P(Y \leq z - k \mid X = k)\, \mathbb P(X = k) = \sum_{k \in K} F_Y(z-k)\, p_k,$$ which can be differentiated to obtain a density function for $Z$ given by
$$ f_Z(z) = \sum_{k \in K} f_Y(z-k) p_k.$$
Now let $R = X Y$, assume $p_0 = 0$, and (for simplicity) suppose every $k \in K$ is positive. Then
$$ \mathbb P(R \leq r) = \mathbb P(X Y \leq r) = \sum_{k \in K} \mathbb P(Y \leq r/k \mid X = k)\, \mathbb P(X= k) = \sum_{k \in K} F_Y(r/k)\, p_k,$$
which again can be differentiated to obtain a density function.
However if $p_0 > 0$, then $\mathbb P(X Y = 0) \geq \mathbb P(X = 0) = p_0 > 0$, which shows that in this case $XY$ has an atom at 0.
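The density formula $f_Z(z) = \sum_{k \in K} f_Y(z-k)\, p_k$ can be sanity-checked numerically. A small Python sketch for the concrete case $X \sim \text{Bernoulli}(1/2)$ and $Y \sim N(0,1)$ (an illustrative choice, not from the original answer): the resulting mixture of shifted normal densities integrates to 1.

```python
import math

# f_Z(z) = sum_k f_Y(z - k) p_k with K = {0, 1}, p_0 = p_1 = 1/2, Y ~ N(0, 1):
# the mixture 0.5*phi(z) + 0.5*phi(z - 1) should integrate to 1.
def phi(t):
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def f_Z(z):
    return 0.5 * phi(z) + 0.5 * phi(z - 1)

# trapezoidal rule on a wide grid; the normal tails beyond it are negligible
lo, hi, n = -10.0, 11.0, 200_000
h = (hi - lo) / n
total = h * (0.5 * f_Z(lo) + sum(f_Z(lo + i * h) for i in range(1, n)) + 0.5 * f_Z(hi))
print(total)  # approximately 1
```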
18,496 | Is the sum of a discrete and a continuous random variable continuous or mixed? | Let $X$ be a discrete random variable with probability mass function $p_X : \mathcal{X} \to [0,1]$, where $\mathcal{X}$ is a discrete set (possibly countably infinite). Random variable $X$ can be thought of as a continuous random variable with the following probability density function
$$f_X (x) = \sum_{x_k \in \mathcal{X}} p_X (x_k) \, \delta (x - x_k)$$
where $\delta$ is the Dirac delta function.
If $Y$ is a continuous random variable, then $Z := X+Y$ is a continuous random variable. As we know the probability density functions of $X$ and $Y$, we can compute the probability density function of $Z$. Assuming that $X$ and $Y$ are independent, the probability density function of $Z$ is given by the convolution of the probability density functions $f_X$ and $f_Y$
$$f_Z (z) = \sum_{x_k \in \mathcal{X}} p_X (x_k) \, f_Y (z - x_k)$$
18,497 | Is the sum of a discrete and a continuous random variable continuous or mixed? | This answer assumes that $X$ and $Y$ are independent. Here is a solution which does not need that assumption.
Edit: I am assuming that "continuous" means "having a pdf." If continuous is instead intended to mean atomless, the proof is similar; simply replace "Lebesgue null set" with "singleton set" in what follows.
Let the support of $X$ be the countable set $\{x_1,x_2,x_3\dots\}$. I will use
Lemma: A random variable $Z$ is continuous if and only if $P(Z\in E)=0$ for all Borel measurable sets $E$ with Lebesgue measure zero.
Proof: Use the Lebesgue-Radon-Nikodym theorem. $ \square$
To prove $X+Y$ is continuous, take any null set $E$, and note that
$$
P(X+Y\in E)=\sum_k P(\{Y+x_k\in E\}\cap \{X=x_k\})\le \sum_k P(Y+x_k\in E)
$$
But $Y+x_k\in E$ if and only if $Y\in E-x_k$. The shifted set $E-x_k$ is still Lebesgue null. Since $Y$ is continuous, this means $P(Y+x_k\in E)=0$, so the above summation is zero, proving $X+Y$ is continuous.
For the question of products, the same logic applies as long as $P(X=0)=0$. If $P(X=0)=1$, then $XY$ is discrete with $P(XY=0)=1$. Otherwise, $XY$ is a nontrivial mixture.
18,498 | Is the sum of a discrete and a continuous random variable continuous or mixed? | Assume that $X$ takes values in a countable set $\{n_i\}_{i=1,2,\dots}$. If $Y$ is continuous, for every real number $t$
$$
{\rm P}(X+Y=t) = \sum_i{\rm P}(X=n_i,Y=t-n_i) =0,
$$
since for all $i$ we have $\{X=n_i,Y=t-n_i\}\subseteq \{Y=t-n_i\}$ and ${\rm P}(Y=t-n_i)=0$.
Therefore $X+Y$ is continuous.
If ${\rm P}(X=0)=0$ then $XY$ is continuous too, and the proof is similar:
$$
{\rm P}(XY=t) = \sum_i{\rm P}(X=n_i,Y=t/{n_i}) =0.
$$
However, in general the product $XY$ can be discrete (for instance if $X=0$), continuous (as we have seen) or mixed (take $Y$ with uniform distribution in $(0,1)$, and let $X=0$ if $Y\le 1/2$, $X=1$ if $Y>1/2$).
18,499 | Weibull Survival Model in R | Ok so I'm just going to post an answer here using the R help that DWin described. Using the function rweibull in R gives the usual form of the Weibull distribution, with its cumulative function being:
$$F(x)=1-\exp(-\left ( \frac{x}{b}\right )^a)$$
So we will denote the shape parameter of rweibull by $a$ and the scale parameter of rweibull by $b$.
Now the problem is that the output of survreg gives an intercept and a scale parameter which are not the same as the shape and scale parameters of rweibull. Let us denote the intercept from survreg by $a_s$ and the scale parameter of survreg by $b_s$.
Then, from ?survreg we have that:
survreg's scale = 1/(rweibull shape)
survreg's intercept = log(rweibull scale)
So this gives us that:
$$a=\frac{1}{b_s}\quad \mbox{and} \quad b=\exp(a_s)$$
So if we suppose that we run the function survreg with covariates $x_1,\ldots,x_{n-1}$, then the output will be:
$\alpha_0,\ldots, \alpha_{n-1}$, the intercept and the coefficients of the covariates, and some scale parameter $k$. The Weibull model in standard form is then given by:
$$F(x)=1-\exp\left (- \left (\frac{x}{\exp(\alpha_0+\alpha_1 x_1+\ldots +\alpha_{n-1} x_{n-1})} \right ) ^{\frac{1}{k}}\right )$$
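The conversion derived above can be sketched outside R as well. Below is a Python version with hypothetical intercept and scale values (in R these would come from the survreg fit); the sanity check uses the fact that any Weibull CDF equals $1 - e^{-1}$ at $x = b$, whatever the shape.

```python
import math

# Hypothetical survreg output (in R: a_s is the (Intercept), b_s is fit$scale).
survreg_intercept = 4.2   # a_s = log(rweibull scale)
survreg_scale = 0.8       # b_s = 1 / (rweibull shape)

shape = 1 / survreg_scale            # a = 1 / b_s
scale = math.exp(survreg_intercept)  # b = exp(a_s)

def weibull_cdf(x, shape, scale):
    return 1 - math.exp(-((x / scale) ** shape))

# at x = b the Weibull CDF equals 1 - exp(-1), regardless of the shape a
print(weibull_cdf(scale, shape, scale))  # 0.6321...
```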
18,500 | Weibull Survival Model in R | I'd like to add an answer with a code example for further clarity.
What we're essentially after is taking the survreg output model and deriving from it the survival function. To avoid the common notation confusion I'll actually go ahead and show the code that does that:
library(survival) # provides survreg, Surv and the stanford2 data
fit <- survreg(Surv(time,status) ~ age, data=stanford2) # this is the survreg output model
survreg.lp <- predict(fit, type = "lp")
survreg.scale <- fit$scale
# this is the survival function!
S_t <- function(t, survreg.scale, survreg.lp){
  shape <- 1/survreg.scale
  scale <- exp(survreg.lp)
  1 - pweibull(t, shape = shape, scale = scale)
}
As mentioned by vkehayas R's pweibull parameterisation is:
$$F(x) = 1-\exp\left(-\left(\frac{x}{b}\right)^a\right)$$
where $a$ is the Weibull distribution shape and $b$ is the scale.
We then get that a = 1/fit$scale and b = exp(predict(fit, type = "lp"))
We can verify below that the derived survival function
# next let's verify it's correct:
fit <- survreg(Surv(time,status) ~ age, data=stanford2) # this is the survreg output model
# this is the survival function!
S_t <- function(t, survreg.scale, survreg.lp){
  shape <- 1/survreg.scale
  scale <- exp(survreg.lp)
  1 - pweibull(t, shape = shape, scale = scale)
}
new_dat <- data.frame(age = c(0, seq(min(stanford2$age), max(stanford2$age), length.out = 10)))
pct <- seq(0.01, 0.99, 0.01)
surv_curves <- sapply(pct,
function(x) predict(fit, type = "quantile", p = 1 - x,
newdata = new_dat))
matplot(y = pct, t(surv_curves), type = "l")
# you can vary the below subject_i variable to see it works for all of them
subject_i <- 1
single_curve <- surv_curves[subject_i, ]
plot(single_curve, pct, type = "l") # this is what we know to be true
times <- round(seq(1, max(single_curve), length.out = 100))
lp <- predict(fit, newdata = new_dat, type = "lp")[subject_i]
surv <- sapply(times, function(t) S_t(t, survreg.scale = fit$scale, survreg.lp = lp))
lines(times, surv, col = "red", lty = 2) # this is the new S_t function
# They match!
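The curves match because predict(..., type = "quantile", p = 1 - x) is exactly the inverse of the Weibull survival function. A small round-trip sketch of that identity (hypothetical a and b standing in for 1/fit$scale and exp(lp); Python used just for the arithmetic):

```python
import math

# Hypothetical shape/scale, standing in for a = 1/fit$scale and b = exp(lp):
a, b = 1.25, math.exp(4.3)

def surv(t):
    # S(t) = exp(-(t/b)^a)
    return math.exp(-((t / b) ** a))

def quantile(p):
    # the time t with F(t) = p, i.e. the role predict(type = "quantile") plays
    return b * (-math.log(1.0 - p)) ** (1.0 / a)
```

Plugging the p-th quantile back into the survival function recovers 1 - p, which is why plotting the quantile curve against S_t gives identical lines.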
So, to summarize:
a = 1/fit$scale and b = exp(predict(fit, type = "lp"))
Hope this helps. I know I pulled a few hairs before figuring this out.
I'd like to add an answer with a code example for further clarity.
What we're essentially after is taking the survreg output model and derive from it the survival function. To avoid the common notation confusion I'll actually go ahead and show the code that does that:
fit <- survreg(Surv(time,status) ~ age, data=stanford2) # this is the survreg output model
survreg.lp <- predict(fit, type = "lp")
survreg.scale <- fit$scale
# this is the survival function!
S_t <- function(t, survreg.scale, survreg.lp){
shape <- 1/survreg.scale
scale <- exp(survreg.lp)
ans <- 1 - pweibull(t, shape = shape, scale = scale)
}
As mentioned by vkehayas R's pweibull parameterisation is:
$$F(x) = 1-exp(-\left(\frac{x}{b}\right)^a$$
where a is the weibull distribution shape and b is the scale.
We then get that a = 1/fit$scale and b = exp(predict(fit, type = "lp"))
We can verify below that the derived survival function
# next let's verify it's correct:
fit <- survreg(Surv(time,status) ~ age, data=stanford2) # this is the survreg output model
# this is the survival function!
S_t <- function(t, survreg.scale, survreg.lp){
shape <- 1/survreg.scale
scale <- exp(survreg.lp)
ans <- 1 - pweibull(t, shape = shape, scale = scale)
}
new_dat <- data.frame(age = c(0, seq(min(stanford2$age), max(stanford2$age), length.out = 10)))
pct <- seq(0.01, 0.99, 0.01)
surv_curves <- sapply(pct,
function(x) predict(fit, type = "quantile", p = 1 - x,
newdata = new_dat))
matplot(y = pct, t(surv_curves), type = "l")
# you can vary the below subject_i variable to see it works for all of them
subject_i <- 1
single_curve <- surv_curves[subject_i, ]
plot(single_curve, pct, type = "l") # this is we know to be true
times <- round(seq(1, max(single_curve), length.out = 100))
lp <- predict(fit, newdata = new_dat, type = "lp")[subject_i]
surv <- sapply(times, function(t) S_t(t, survreg.scale = fit$scale, survreg.lp = lp))
lines(times, surv, col = "red", lty = 2) # this is the new S_t function
# They match!
So, to summarize:
a = 1/fit$scale and b = exp(predict(fit, type = "lp"))
Hope this helps. I know I pulled a few hairs before figuring this out. | Weibull Survival Model in R
I'd like to add an answer with a code example for further clarity.
What we're essentially after is taking the survreg output model and derive from it the survival function. To avoid the common notatio |