34,601
Bayesian two-factor ANOVA
I'm a bit late to the show here, but John Kruschke's recently published book Doing Bayesian Data Analysis: A Tutorial with R and BUGS has a whole chapter (19) on this. His book is written in a highly accessible and practical style, well worth checking out for his description of the approach. He also includes R and BUGS code for his examples. His website accompanying the book is interesting reading also: http://www.indiana.edu/~kruschke/DoingBayesianDataAnalysis/ and includes all the code in the book.
34,602
Bayesian two-factor ANOVA
See Chapter 10 of Marc Kery's book Introduction to WinBUGS for Ecologists, where he compares a two-way ANOVA in R with several versions of the model in WinBUGS. (The book has lots of great examples; ecology not a pre-requisite.) Website for the book is here: http://www.mbr-pwrc.usgs.gov/software/kerybook/
34,603
Interpretation of positive and negative beta weights in regression equation
In explaining the meaning of a regression coefficient, I found the following explanation very useful. Suppose we have the regression $$Y=a+bX.$$ Say $X$ changes by $\Delta X$ and $Y$ changes by $\Delta Y$. Since we have a linear relationship, $$Y+\Delta Y= a+ b(X+\Delta X).$$ Since $Y=a+bX$, we get $$\Delta Y = b \Delta X.$$ From this it is easy to see that if $b$ is positive, then a positive change in $X$ will result in a positive change in $Y$; if $b$ is negative, then a positive change in $X$ will result in a negative change in $Y$. Note: I treated this question as a pedagogical one, i.e. the aim is a simple explanation. Note 2: As pointed out by @whuber, this explanation rests on the important assumption that the relationship holds for all possible values of $X$ and $Y$. In reality this is a very restrictive assumption; on the other hand, the explanation remains valid for small values of $\Delta X$, since Taylor's theorem says that relationships which can be expressed as differentiable functions (a reasonable assumption to make) are locally linear.
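As a quick illustration (a minimal sketch with made-up data, not part of the original answer), the fitted slope from a simple regression reproduces the $\Delta Y = b \Delta X$ relationship exactly for the fitted values:

    # Simulate data with a known negative slope (hypothetical example)
    set.seed(1)
    x <- runif(100)
    y <- 3 - 2 * x + rnorm(100, sd = 0.1)
    fit <- lm(y ~ x)
    b <- coef(fit)[2]           # estimated slope, close to -2

    # A change of +0.5 in x changes the fitted y by b * 0.5 (a negative change here)
    dx <- 0.5
    predict(fit, data.frame(x = 1 + dx)) - predict(fit, data.frame(x = 1))
    b * dx                      # identical, by linearity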
34,604
Interpretation of positive and negative beta weights in regression equation
As @gung notes, there are varying conventions regarding the meaning of $\beta$ (i.e., "beta"). In the broader statistical literature, beta is often used to represent unstandardised coefficients. However, in psychology (and perhaps other areas), there is often a distinction between $b$ for unstandardised and beta for standardised coefficients. This answer assumes that the context indicates that beta represents standardised coefficients. Beta weights: As @whuber mentioned, "beta weights" are by convention standardised regression coefficients (see Wikipedia on standardised coefficients). In this context, $b$ is often used for unstandardised coefficients and $\beta$ is often used for standardised coefficients. Basic interpretation: A beta weight for a given predictor variable is the predicted difference in the outcome variable, in standard deviation units, for a one standard deviation increase on the given predictor variable, holding all other predictors constant. General resource on multiple regression: The question is elementary and implies that you should read some general material on multiple regression (here is an elementary description by Andy Field). Causality: Be careful of language like "the dependent variable has increased in response to greater use of the independent variable". Such language has causal connotations, and beta weights by themselves are not enough to justify a causal interpretation; you would require additional evidence for that.
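A minimal sketch (my own illustration, with simulated data) of how standardised coefficients relate to unstandardised ones in R; standardising every variable before fitting yields the beta weights directly:

    set.seed(1)
    d <- data.frame(x1 = rnorm(200), x2 = rnorm(200))
    d$y <- 1 + 0.5 * d$x1 - 0.3 * d$x2 + rnorm(200)

    b <- coef(lm(y ~ x1 + x2, data = d))             # unstandardised coefficients
    d_std <- as.data.frame(scale(d))                 # z-score every variable
    beta  <- coef(lm(y ~ x1 + x2, data = d_std))     # standardised (beta) weights

    # Relationship between the two: beta_j = b_j * sd(x_j) / sd(y)
    b["x1"] * sd(d$x1) / sd(d$y)
    beta["x1"]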
34,605
Can we use bounded continuous variables as predictors in regression and logistic regression?
The condition that dependent variables must be "continuous and unbounded" is unusual: there is no mathematical or statistical requirement for either. In most regression models we posit that the dependent variable be a linear combination of the independent variables plus an independent random error term of zero mean, approximately and within the ranges attained by, or potentially attained by, the independent variables. For instance, it would be fine to regress the length of the Mississippi River on time for the period 1700-1850 but not to project the regression back, say, a million years or forward 700 years:

    In the space of one hundred and seventy-six years the Lower Mississippi has shortened itself two hundred and forty-two miles. That is an average of a trifle over one mile and a third per year. Therefore, any calm person, who is not blind or idiotic, can see that in the Old Oolitic Silurian Period, just a million years ago next November, the Lower Mississippi River was upwards of one million three hundred thousand miles long, and stuck out over the Gulf of Mexico like a fishing-rod. And by the same token any person can see that seven hundred and forty-two years from now the Lower Mississippi will be only a mile and three-quarters long, and Cairo and New Orleans will have joined their streets together, and be plodding comfortably along under a single mayor and a mutual board of aldermen. There is something fascinating about science. One gets such wholesale returns of conjecture out of such a trifling investment of fact. (Mark Twain, Life on the Mississippi.)

In the present case it sounds like the angle is an independent variable, not the dependent one, so this question does not even arise. The problem that arises is that the angle seems to be defined only modulo 360 degrees (actually mod 180). Actually, the angle is really a latitude and varies from 0 to 180 (or -90 to 90) without "wrapping around" at all. Really, then, all that matters is how best to express this angle: does the reaction rate vary linearly with the angle, or does it vary perhaps with its sine or cosine? Or maybe its tangent, which is unbounded? But that matter is addressed with appropriate exploratory analysis, perhaps by some stereochemical considerations, and standard procedures to fit and check models. Therefore this angular variable neither enjoys nor suffers from any special quality that would distinguish it from other independent variables.
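A small sketch (entirely hypothetical data, not from the original question) of the kind of exploratory comparison suggested above: fit the rate against the raw angle and against its cosine, and compare the fits before settling on a form.

    # Hypothetical reaction-rate data depending on an angle in degrees
    set.seed(1)
    angle <- runif(60, 0, 180)
    rate  <- 5 + 2 * cos(angle * pi / 180) + rnorm(60, sd = 0.3)

    fit_linear <- lm(rate ~ angle)                    # angle entered linearly
    fit_cosine <- lm(rate ~ cos(angle * pi / 180))    # angle entered through its cosine

    AIC(fit_linear, fit_cosine)   # exploratory comparison; residual plots would follow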
34,606
Can we use bounded continuous variables as predictors in regression and logistic regression?
With respect to the question in the header: with logistic regression predicting posterior probabilities, the dependent variable (outcome) is both bounded and continuous. One train of thought to arrive at logistic regression is in fact to think about how to construct a regression with limits on a continuous outcome. Say you want to do a regression directly on a probability. "Common" regression methods (e.g. linear regression) give you continuous output in the set of real numbers, $\mathbb R$, but probabilities are in $[0, 1]$. So put a sigmoid transformation into your model to map $\mathbb R \mapsto [0, 1]$. If you choose the logistic function $\frac{1}{1 + e^{-x}}$ (a standard choice of sigmoid), you end up with logistic regression. With respect to modeling angles in general, I'd like to follow up with another question: how do you model cyclic behaviour, i.e. how would you tell a model that 359° is almost the same as 0° (regardless of whether the variable is dependent or independent)?
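A small sketch (my own illustration, simulated data) of the sigmoid idea: the logistic function squeezes the linear predictor into $(0, 1)$, which is what glm() with a binomial family fits.

    # The logistic (sigmoid) function maps the real line to (0, 1)
    sigmoid <- function(x) 1 / (1 + exp(-x))
    curve(sigmoid, from = -6, to = 6)

    # Logistic regression: a linear model on the log-odds scale (hypothetical data)
    set.seed(1)
    x <- rnorm(200)
    p <- sigmoid(-0.5 + 2 * x)
    y <- rbinom(200, size = 1, prob = p)
    glm(y ~ x, family = binomial)   # recovers coefficients close to (-0.5, 2)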
34,607
How will you deal with "don't know" and "missing data" in survey data?
Well, you should also consider that "don't know" is at least some kind of answer, whereas non-response is purely a missing value. Now, we often allow for a "don't know" response in surveys just to avoid forcing people to provide a response anyway (which might bias the results). For example, in the National Health and Nutrition Examination Survey, they are coded differently but subsequently discarded from the analysis. You could try analyzing the data both ways: (1) treat "don't know" as a specific response category and handle the full response set with some kind of multivariate data analysis (e.g. multiple correspondence analysis, or multiple factor analysis for mixed data; see the FactoMineR package), and (2) if this doesn't bring any evidence of distortion in the item distributions, just merge it with the missing values. For (2), I would also suggest checking that "don't know" and the missing values are at least missing at random (MAR), or that they are not specific to one group of respondents (e.g. male/female, age class, SES, etc.).
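A minimal sketch (hypothetical survey items, my own example; FactoMineR is assumed to be installed) of strategy (1), keeping "don't know" as its own level, next to the recode used for strategy (2):

    library(FactoMineR)

    # Hypothetical survey items with "don't know" (DK) responses
    set.seed(1)
    lev <- c("agree", "disagree", "DK")
    d <- data.frame(
      q1 = factor(sample(lev, 100, replace = TRUE, prob = c(.5, .4, .1))),
      q2 = factor(sample(lev, 100, replace = TRUE, prob = c(.3, .6, .1))),
      q3 = factor(sample(lev, 100, replace = TRUE, prob = c(.45, .45, .1)))
    )

    # (1) "don't know" kept as a response category in an MCA
    mca_dk <- MCA(d, graph = FALSE)

    # (2) "don't know" recoded as missing; compare the item distributions with (1)
    d_na <- as.data.frame(lapply(d, function(x) droplevels(replace(x, x == "DK", NA))))
    summary(d_na)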
34,608
How will you deal with "don't know" and "missing data" in survey data?
It depends on the type of question/response in your survey. If the responses are like "I like", "I dislike", "Don't know", chl's answer partially addresses your question. The first solution is chl's answer: you have to check that "Don't know" doesn't hide anything, by analysing these values separately to see whether they highlight a specific profile of respondents. I'm not a big fan of imputation, but "Frenchy" software does it for MCA, often under a MAR assumption. It supposes that these answers are randomly distributed (you randomly pick another response modality). You can also use a slightly more sophisticated approach: if "Like" is at 30% and "Dislike" at 70%, you pick a uniform random number on (0, 1) and choose "Like" if your number is at or below 0.3; if you pick a number between 0.3 and 1, you choose "Dislike". A more modern approach is multiple imputation (see PROC MI in SAS and the mice package in R). Imputation is very efficient, but it can't recreate atypical profiles. If you're working in educational testing or if you need to compute a score, let me know and I will complete this answer with material on score estimation. References: Rubin, D.B. (1987). Multiple Imputation for Nonresponse in Surveys. Wiley. The mice package: http://cran.r-project.org/web/packages/mice/index.html Groves, R.M., Fowler, F.J., et al. Survey Methodology. Wiley.
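A quick sketch (my own illustration, with made-up data) of the random-draw imputation described above, followed by a multiple-imputation call with mice (assumed to be installed):

    # Random draw from the observed response distribution (the 30%/70% idea above)
    set.seed(1)
    resp <- factor(c(sample(c("Like", "Dislike"), 90, replace = TRUE, prob = c(0.3, 0.7)),
                     rep(NA, 10)))
    p_like <- mean(resp == "Like", na.rm = TRUE)
    miss   <- is.na(resp)
    resp[miss] <- ifelse(runif(sum(miss)) <= p_like, "Like", "Dislike")

    # Multiple imputation with mice (small numeric example)
    library(mice)
    d   <- data.frame(x = rnorm(100), y = rnorm(100))
    d$y[sample(100, 15)] <- NA
    imp <- mice(d, m = 5, printFlag = FALSE)
    fit <- with(imp, lm(y ~ x))
    pool(fit)                      # Rubin's rules to combine the 5 analyses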
34,609
FA: Choosing Rotation matrix, based on "Simple Structure Criteria"
The R psych package includes various routines to apply factor analysis (whether it be PCA-, ML- or FA-based); see also my short review on crantastic. Most of the usual rotation techniques are available, as well as algorithms relying on simple structure criteria; you might want to have a look at W. Revelle's paper on this topic, Very Simple Structure: An Alternative Procedure For Estimating The Optimal Number Of Interpretable Factors (MBR 1979 (14)), and the VSS() function. Many authors use orthogonal rotation (VARIMAX), considering loadings higher than, say, 0.3 or 0.4 (which amounts to 9 or 16% of variance explained by the factor), as it provides simpler structures for interpretation and scoring purposes (e.g., in quality-of-life research); others (e.g. Cattell, 1978; Kline, 1979) would recommend oblique rotations since "in the real world, it is not unreasonable to think that factors, as important determiners of behavior, would be correlated" (I'm quoting Kline, Intelligence. The Psychometric View, 1991, p. 19). To my knowledge, researchers generally start with FA (or PCA), using a scree plot together with simulated data (parallel analysis) to help choose the right number of factors. I have often found that item cluster analysis and VSS nicely complement such an approach. When one is interested in second-order factors, or wants to carry on with SEM-based methods, then obviously you need to use oblique rotation and factor the resulting correlation matrix. Other packages/software: lavaan, for latent variable analysis in R; OpenMx, based on Mx, a general-purpose software package including a matrix algebra interpreter and numerical optimizer for structural equation modeling. References 1. Cattell, R.B. (1978). The Scientific Use of Factor Analysis in Behavioural and Life Sciences. New York: Plenum. 2. Kline, P. (1979). Psychometrics and Psychology. London: Academic Press.
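A short sketch (my own example on simulated items; the psych package is assumed to be installed) comparing an orthogonal and an oblique rotation and running VSS():

    library(psych)

    # Simulated items loading on two correlated factors (hypothetical example)
    set.seed(1)
    n  <- 300
    f1 <- rnorm(n); f2 <- 0.4 * f1 + sqrt(1 - 0.4^2) * rnorm(n)   # correlated factors
    X  <- cbind(sapply(1:4, function(i) 0.7 * f1 + rnorm(n, sd = 0.7)),
                sapply(1:4, function(i) 0.7 * f2 + rnorm(n, sd = 0.7)))
    colnames(X) <- paste0("item", 1:8)

    fa(X, nfactors = 2, rotate = "varimax")   # orthogonal rotation
    fa(X, nfactors = 2, rotate = "oblimin")   # oblique rotation; inspect $Phi for the factor correlation

    VSS(X)   # Very Simple Structure criterion (also reports Velicer's MAP)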
34,610
FA: Choosing Rotation matrix, based on "Simple Structure Criteria"
I find myself routinely using parallel analysis (O'Connor, 2000), which neatly solves the problem of how many factors to extract. See: https://people.ok.ubc.ca/brioconn/nfactors/nfactors.html O'Connor, B. P. (2000). SPSS and SAS programs for determining the number of components using parallel analysis and Velicer's MAP test. Behavior Research Methods, Instrumentation, and Computers, 32, 396-402.
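In R, parallel analysis is also available directly; a minimal sketch (the psych package is assumed, and the data matrix here is a placeholder for your own items):

    library(psych)

    set.seed(1)
    X <- matrix(rnorm(300 * 8), ncol = 8)     # placeholder data; use your own item matrix
    fa.parallel(X, fm = "ml", fa = "both")    # compares observed eigenvalues to those from simulated data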
34,611
FA: Choosing Rotation matrix, based on "Simple Structure Criteria"
I would have to second chl's suggestion of the psych package; it's extremely useful and has implementations of the MAP and parallel analysis criteria for the number of factors. In my own experience, I have found that if you create factor analysis solutions for all the numbers between those returned by MAP and parallel analysis, you can normally find a relatively optimal solution. I would also second the use of OpenMx for confirmatory factor analysis, as it seems to give the best results of all of them, and is much, much better for badly behaved matrices (as mine tend to be). The syntax is also quite nice, once you get used to it. The only issue that I have with it is that the optimiser is not open source, and thus it is not available on CRAN. Apparently they are working on an open-source implementation of the optimiser, so that may not be an issue for much longer.
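A rough sketch (my own illustration with psych; k_map and k_pa are placeholders for the factor counts suggested by MAP and parallel analysis) of fitting every solution between the two criteria and comparing them:

    library(psych)

    # X is your item matrix; the data and counts below are placeholders
    set.seed(1)
    X <- matrix(rnorm(300 * 10), ncol = 10)
    k_map <- 2   # e.g. number of factors suggested by Velicer's MAP (via VSS(X))
    k_pa  <- 4   # e.g. number suggested by parallel analysis (via fa.parallel(X))

    fits <- lapply(k_map:k_pa, function(k) fa(X, nfactors = k, rotate = "oblimin"))
    sapply(fits, function(f) f$TLI)    # compare a fit index (TLI here) across solutions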
34,612
FA: Choosing Rotation matrix, based on "Simple Structure Criteria"
Great question. This is not really an answer, just a few thoughts. In most of the applications where I have used factor analysis, permitting correlated factors makes more theoretical sense, so I tend to rely on the promax rotation method. I used to do this in SPSS and now I use the factanal function in R.
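For reference, a minimal sketch (simulated data, my own example) of an oblique promax rotation with base R's factanal:

    # Promax (oblique) rotation of a maximum likelihood factor solution
    set.seed(1)
    n  <- 200
    f1 <- rnorm(n); f2 <- 0.5 * f1 + rnorm(n)               # correlated latent factors
    X  <- cbind(sapply(1:3, function(i) 0.7 * f1 + rnorm(n)),
                sapply(1:3, function(i) 0.7 * f2 + rnorm(n)))
    colnames(X) <- paste0("v", 1:6)

    factanal(X, factors = 2, rotation = "promax")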
34,613
What is the expected MINIMUM value drawn from a uniform distribution between 0 and 1 after n trials?
You are looking for order statistics. The wiki indicates that the distribution of the minimum draw from a uniform distribution between 0 and 1 after $n$ trials is a beta distribution (I have not checked it for correctness, which you should probably do). Specifically, let $U_{(1)}$ be the minimum order statistic. Then $U_{(1)} \sim B(1,n)$, and therefore the mean is $\frac{1}{1+n}$. You can use the beta distribution to identify $a$ and $b$ such that $Prob(a \le U_{(1)} \le b) = 0.95$. By the way, the use of the term confidence interval is not appropriate in this context, as you are not performing inference. Update: Calculating $a$ and $b$ such that $Prob(a \le U_{(1)} \le b) = 0.95$ is not straightforward. There are several possible ways in which you can calculate $a$ and $b$. One approach is to center the interval around the mean. In this approach, you would set $a = \mu - \delta$ and $b = \mu + \delta$, where $\mu = \frac{1}{1+n}$. You would then calculate $\delta$ such that the required probability is 0.95. Do note that under this approach you may not be able to identify a symmetric interval around the mean for large $n$, but this is just my hunch.
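A quick numeric sketch (my own, using the Beta(1, n) distribution quoted above) of the mean and of solving for $\delta$ around it with uniroot:

    n  <- 10
    mu <- 1 / (1 + n)                        # E[U_(1)] for the minimum of n uniforms

    # Find delta so that P(mu - delta <= U_(1) <= mu + delta) = 0.95
    # (the lower end is truncated at 0, so for larger n the interval stops being symmetric,
    #  which is the caveat raised in the answer)
    cover <- function(delta) pbeta(mu + delta, 1, n) - pbeta(max(mu - delta, 0), 1, n) - 0.95
    delta <- uniroot(cover, c(1e-6, 1))$root
    c(lower = max(mu - delta, 0), upper = mu + delta)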
34,614
What is the expected MINIMUM value drawn from a uniform distribution between 0 and 1 after n trials?
As Srikant suggests, you need to look at order statistics. To add to Srikant's answer, you can simulate this process easily in R:

    n = 10
    N = 1000
    sims = numeric(N)
    for(i in 1:N) sims[i] = min(runif(n))
    hist(sims, freq = FALSE)
    x = seq(0, 1, 0.01)
    lines(x, dbeta(x, 1, n), col = 2)

The resulting histogram of simulated minima closely follows the overlaid Beta(1, n) density. Slight digression: this question is related to one of my favourite statistics problems, the German tank problem. This problem is about the maximum of uniform distributions, and can be summarised as: Suppose one is an Allied intelligence analyst during World War II, and one has some serial numbers of captured German tanks. Further, assume that the tanks are numbered sequentially from 1 to N. How does one estimate the total number of tanks? (Taken from Wikipedia.) Check out the Wikipedia page for more details.
34,615
What is the expected MINIMUM value drawn from a uniform distribution between 0 and 1 after n trials?
Following @Srikant, one can compute the CDF of the beta distribution and find conditions on $a, b$ such that the interval $[a,b]$ contains the minimum of $n$ draws of a uniform with 95% probability. The condition is $(1-a)^n - (1-b)^n = 0.95$. One attractive choice would then be the interval $[0, 1 - 0.05^{1/n}]$, which is also the smallest interval with the desired property.
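A short numerical check (my own) of that interval using the Beta(1, n) CDF:

    n <- 10
    b <- 1 - 0.05^(1/n)             # upper end of the proposed interval [0, b]
    pbeta(b, 1, n)                  # = 0.95, confirming 95% coverage of the minimum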
34,616
A case of survivorship bias?
The basic idea behind this is that football clubs have an age cut-off when determining teams. In the league my children participate in, the age restriction states that children born after July 31st are placed on the younger team. This means that two children who are effectively the same age can be playing in two different age groups. The child born on July 31st will be playing on the older team and will theoretically be the youngest and smallest on the team and in the league. The child born on August 1st will be the oldest and largest child in the league and will be able to benefit from that. The survivorship bias comes in because competitive leagues select the best players for their teams. The best players in childhood are often the older players, since they have had additional time for their bodies to mature. This means that otherwise acceptable younger players are not selected simply because of their age. Since they are not given the same opportunities as the older kids, they don't develop the same skills and eventually drop out of competitive soccer. If the cut-off for competitive soccer in enough countries is January 1st, that would explain the phenomenon you see. A similar phenomenon has been observed in several other sports, including baseball and ice hockey.
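A toy simulation (entirely my own, with made-up numbers) of this selection mechanism: give children a small ability bonus for being older relative to a January 1st cut-off, select the "best" ones, and look at the birth-month distribution of those selected.

    # Toy model of the relative age effect (all numbers hypothetical)
    set.seed(1)
    n           <- 10000
    birth_month <- sample(1:12, n, replace = TRUE)          # uniform birthdays over the year
    rel_age     <- (12 - birth_month + 1) / 12              # older within the cohort if born early in the year
    ability     <- rnorm(n) + 0.5 * rel_age                 # small maturity advantage for the older kids

    selected <- ability > quantile(ability, 0.95)           # competitive teams take the top 5%
    round(prop.table(table(birth_month[selected])), 2)      # selected players skew toward January-March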
34,617
A case of survivorship bias?
Malcolm Gladwell analyses this problem in his book Outliers, using hockey players as the example.
34,618
If the difference of scores is normally distributed, the sample distributions don't matter in a paired t-test, right?
You are correct. A paired t-test is conducted on the differences of the paired scores. It doesn't look at the individual scores in any way. A paired t-test is precisely the same as a one-sample t-test on the differences of the pairs. Consider the following observations, A and B. While the distributions of each are skewed, the differences are relatively symmetric and bell-shaped in distribution. The results of the paired t-test and one-sample t-test are the same. In R:

    A = c(1, 2, 3, 4, 4, 5, 5, 5, 6, 6, 6, 6, 6, 7, 7, 7, 7, 7, 7, 7, 7, 7, 7)
    B = c(0.98, 2.10, 3.84, 4.11, 3.02, 3.01, 5.67, 5.07, 6.20, 5.67, 6.77, 7.61,
          6.15, 9.32, 8.41, 7.71, 8.64, 8.49, 8.15, 9.00, 8.80, 7.85, 8.90)

    hist(A)
    hist(B)

    Difference = A - B
    hist(Difference)

    t.test(A, B, paired=TRUE)
    t.test(Difference)

    ### Paired t-test
    ### t = -3.3339, df = 22, p-value = 0.00301
    ###
    ### One Sample t-test
    ### t = -3.3339, df = 22, p-value = 0.00301
34,619
Why symmetric trees are not used in xgboost
Mostly yes. Simple logistics: symmetric trees were not around when the original XGBoost implementation came out, and nobody using XGBoost is bothered enough to code them, as the current performance is deemed adequate. New base-learners are no clear win: initially, people thought, for example, that DART (Dropouts) was also a game-changer, and it turned out to be... OK-ish? Just to be clear, symmetric trees are not guaranteed to be better. If anything, we know empirically that CatBoost has not dominated any of the other implementations; it is on par with them. Also, given that we already have so many different ways to regularise a GBM, the ability to have one more regularisation lever is not so groundbreaking. Symmetric trees have a clear advantage when it comes to inference time, but for GBM applications the inference speed is usually a small aspect of the project. Contrary to that, CatBoost's grandparent from Yandex, MatrixNet, was a GBM specifically geared towards recommender systems, where inference time is crucial, and it was developed at a time when GPUs were not so mature; this focus has carried forward. In contrast, XGBoost, which started as part of DMLC, has moved towards high-performance computing on the GPU, first via cuBLAS and nowadays via RAPIDS, a far more GPU-first approach. (Side note: using a GPU doesn't equate to HPC.)
34,620
Should we always minimize squared deviations if we want to find the dependency of mean on features?
Minimizing the MSE in the cases you describe indeed produces a consistent estimator for the model parameters. The consistency is related to the fact that the derivative of the MSE, and therefore the first-order optimality condition, is linear in the observations $y_i$. So, if you want a consistent estimator without any additional information on the distribution of the samples, MSE is the only option. On the other hand, if you have a parametric form for the distribution of $y_i$, then the maximum likelihood estimator (MLE) is also consistent, and moreover asymptotically unbiased and efficient, meaning that it has the smallest variance among all unbiased estimators (in the limit $n \to \infty$). In that sense it is the most "accurate" estimator possible.
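As a small illustration (my own sketch, not part of the original answer), the least-squares (MSE-minimizing) slope converges to the true parameter as the sample size grows, even when the error distribution is skewed:

    # Consistency of the least-squares slope under skewed, zero-mean errors
    set.seed(1)
    slope_hat <- function(n) {
      x <- runif(n)
      y <- 1 + 2 * x + (rexp(n) - 1)            # errors have mean zero but are skewed
      coef(lm(y ~ x))[2]
    }
    sapply(c(50, 500, 5000, 50000), slope_hat)   # estimates approach the true slope 2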
34,621
Should we always minimize squared deviations if we want to find the dependency of mean on features?
NO. It is important to keep in mind that an estimator of a parameter can take on many forms. In fact, constants can be estimators! Consequently, we might find that calculating something other than the empirical mean has desirable properties for estimating the mean. In this post, I give an example where estimating the conditional median, by minimizing MAE, gives better estimates of the regression parameters than the OLS estimates, in the sense that the estimator is unbiased (as is the case with the OLS estimator) but has lower variance than the OLS estimator. Another answer here mentions the Gauss-Markov theorem. As Richard Hardy explains in the answer to my linked question, the MAE minimizer is nonlinear; thus Gauss-Markov does not apply, and it is fine for minimization of MAE to result in an unbiased estimator that has lower variance than OLS.

EDIT: Another answer of mine shows when minimizing in-sample MAE results in lower out-of-sample MSE than minimizing in-sample MSE.

EDIT 2: Let's check out a simulation and visualization. In the code, I simulate a heavy-tailed $t_{1.1}$ error term. At each iteration, I calculate the OLS coefficients (with lm) and the MAE-minimizing coefficients (with rq). Then I plot those regression lines, along with the true regression line.

    library(quantreg)
    set.seed(2022)
    N <- 50
    B <- 100
    beta0 <- 2
    beta1 <- -3
    x <- seq(0, 1, 1/(N - 1))
    yhat <- beta0 + (beta1 * x)
    Q0 <- Q1 <- L0 <- L1 <- rep(NA, B)
    for (i in 1:B){
      y <- yhat + rt(N, 1.1)
      L <- lm(y ~ x)
      L0[i] <- summary(L)$coef[1, 1]
      L1[i] <- summary(L)$coef[2, 1]
      Q <- quantreg::rq(y ~ x, tau = 0.5)
      Q0[i] <- summary(Q)$coef[1, 1]
      Q1[i] <- summary(Q)$coef[2, 1]
    }
    par(mfrow = c(2, 1))
    plot(
      x, yhat, type = 'l', lty = 2,
      ylim = c(min(L0 + L1*x, Q0 + Q1*x), max(L0 + L1*x, Q0 + Q1*x)),
      main = "OLS"
    )
    for (i in 1:B){
      lines(x, L0[i] + L1[i] * x, col = 'red')
    }
    lines(x, yhat, type = 'l', lty = 2)
    #
    plot(
      x, yhat, type = 'l', lty = 2,
      ylim = c(min(L0 + L1*x, Q0 + Q1*x), max(L0 + L1*x, Q0 + Q1*x)),
      main = "MAE"
    )
    for (i in 1:B){
      lines(x, Q0[i] + Q1[i] * x, col = 'red')
    }
    lines(x, yhat, type = 'l', lty = 2)
    par(mfrow = c(1, 1))

The red estimates of the conditional means are much more reasonable when absolute loss is minimized.

EDIT 4: Another example is in "classification" problems with discrete outcomes (say binary for now). The typical loss function minimized is log loss ("crossentropy" in some circles), which corresponds to maximum likelihood estimation in logistic regression. Our Frank Harrell has a strong opinion about minimizing this loss function as opposed to minimizing square loss. $$ \text{Log Loss:}\quad L(y, p) = -\dfrac{1}{N} \sum_{i = 1}^N \bigg[ y_i\log(p_i) + (1 - y_i)\log(1 - p_i) \bigg] $$

EDIT 5: Finally, there is the James-Stein estimator, which shows that the OLS solution to linear regression is inadmissible for any reasonable sample size, despite the Gaussian conditional distribution. That is, even the maximum likelihood estimator is inadmissible, since it is dominated by James-Stein.
34,622
Should we always minimize squared deviations if we want to find the dependency of mean on features?
A similar question (if not the same) is: If the predicted value of machine learning method is E(y | x), why bother with different cost functions for y | x? The theoretical mean of a distribution minimizes the squared error, but that does not mean that the sample mean is always the best estimator with regard to the squared error loss function. The sample mean has statistical variation. In an answer to the above question an example is given that shows the sample median performing better than the sample mean when the errors are Laplace distributed. (The linked answer includes a figure illustrating this comparison.) Another example question is: Why is the Median Less Sensitive to Extreme Values Compared to the Mean? The median can be a better estimator in the case of distributions with outliers. Related is also: Could a mismatch between loss functions used for fitting vs. tuning parameter selection be justified? The answer to that question explains that, when we wish to have an estimator that optimizes the mean squared error, it does not follow that we need to use the squared error loss function in fitting/training the model. Aside from using different estimators like the median or maximum likelihood estimators, there is also the concept of biased estimators that can improve the expectation of the loss. Examples are regularisation (ridge regression, lasso regression), Bayesian estimators, and shrinkage.
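To make the Laplace comparison concrete, here is a minimal R sketch (my own toy simulation, not taken from the linked answer; the sample size and number of replications are arbitrary) comparing the mean squared error of the sample mean and the sample median as estimators of the location of a Laplace distribution:

# Sketch: MSE of the sample mean vs. the sample median under Laplace errors.
set.seed(1)
n <- 25       # sample size (arbitrary)
B <- 10000    # number of simulated samples
mu <- 0       # true location

# Laplace(0, 1) draws as the difference of two independent Exp(1) draws
rlaplace <- function(n) rexp(n) - rexp(n)

est_mean   <- replicate(B, mean(mu + rlaplace(n)))
est_median <- replicate(B, median(mu + rlaplace(n)))

mean((est_mean   - mu)^2)  # MSE of the sample mean
mean((est_median - mu)^2)  # MSE of the sample median (smaller here)

Both estimators are essentially unbiased by symmetry; the difference shows up in their variance, which is what drives the MSE.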
34,623
Should we always minimize squared deviations if we want to find the dependency of mean on features?
Your estimator is the OLS estimator in nonlinear regression Your problem is essentially just the OLS estimation problem in nonlinear regression. To see this, suppose you have a nonlinear regression model of the form: $$Y_i = \mu(x_i, \mathbf{p}) + \varepsilon_i \quad \quad \quad \quad \quad \varepsilon_i \sim \text{IID Dist}(\text{Mean} = 0),$$ with error terms that are independent of the explanatory variables.$^\dagger$ Taking conditional expectations of both sides yields the equation: $$\mu(x_i, \mathbf{p}) = \mathbb{E}(Y_i|X_i=x_i, \mathbf{p}).$$ This shows that the function $\mu$ is the true regression function, which represents the conditional expected value of the response variable given the explanatory variable. Now, if you have $n$ observations from this model, then the OLS estimator for $\mathbf{p}$ is defined by the optimisation requirement: $$\hat{\mathbf{p}}_n = \underset{\mathbf{p}}{\arg\min} \sum_{i=1}^n (y_i-\mu(x_i, \mathbf{p}))^2,$$ which is the optimisation requirement in your question. So, your problem is about OLS estimation of the parameter $\mathbf{p}$, and you want to know the consistency properties of the OLS estimator $\hat{\mathbf{p}}_n$. Consistency properties in regression models are something that has been examined extensively in the statistical literature. Under broad conditions on the sequence of explanatory variables, the estimator $\hat{\mathbf{p}}_n$ is a consistent estimator for $\mathbf{p}$. For OLS estimation in nonlinear regression, these conditions are extensions of the "Grenander conditions" for consistency in linear regression (see e.g., Richardson and Bhattacharyya 1990). The exact conditions are quite technical, but heuristically, they require the sequence of explanatory variables $x_1,x_2,x_3,...$ to be such that the "influence" of any finite set of data points tends to zero as $n \rightarrow \infty$. If you have the mathematical background to do so, I recommend you read the linked paper to get an understanding of the consistency conditions in nonlinear regression. For a simpler place to get started you can have a look at some answers on this site that look at the Grenander conditions for OLS consistency in linear regression (see e.g., here). $^\dagger$ In this formulation of the nonlinear regression model I allow any error distribution with zero mean. In the special case where the error distribution is normal the OLS estimator will correspond to the MLE. The general case is used here because you have not specified an error distribution in your problem.
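To illustrate this consistency numerically, here is a small R sketch (my own example; the exponential mean function, parameter values, and noise level are assumptions for illustration, not part of the question) fitting a nonlinear mean function by least squares with nls() and comparing the sampling spread of the estimates at two sample sizes:

# Sketch: nonlinear least squares with mu(x, p) = a * exp(b * x); the spread of the
# estimates shrinks as n grows, in line with the consistency results discussed above.
set.seed(42)
a <- 2; b <- -1   # assumed true parameter values

fit_once <- function(n) {
  x <- runif(n, 0, 3)
  y <- a * exp(b * x) + rnorm(n, sd = 0.3)
  coef(nls(y ~ A * exp(B * x), start = list(A = 1, B = -0.5)))
}

est_small <- t(replicate(200, fit_once(30)))
est_large <- t(replicate(200, fit_once(1000)))
apply(est_small, 2, sd)   # sampling spread of (A, B) at n = 30
apply(est_large, 2, sd)   # noticeably smaller spread at n = 1000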
34,624
Should we always minimize squared deviations if we want to find the dependency of mean on features?
Yes, it is possible for estimators obtained by minimizing some different than squared deviation to give a better estimator of model parameters. The question of whether a given estimator can be beaten by others is studied in statistical decision theory. I'll lay out the basics below then give two examples. A framework to compare estimators Suppose we have a data vector $Z \in \mathbb{R}^{p}$, which gives the covariates and response of a single case. Suppose we are estimating a parameter $\mathbf{p}$. Using i.i.d. data $Z_1, \dots, Z_n$, we can estimate the parameter using $\delta(Z_1, \dots, Z_n)$ for some functional $\delta$. We can evaluate the closeness of an estimate to the parameter using the squared-error loss function $\|\delta(Z_1, \dots, Z_n) - \mathbf{p}\|^2$. Notice the loss function depends on the data as well as the parameter. We can form the risk $R(\delta, \mathbf{p}) = \mathbb{E} \|\delta(Z_1, \dots, Z_n) - \mathbf{p}\|^2$ by averaging over the data. For a given estimation rule $\delta$, the risk $R$ tells us how close we can expect the estimator to be to the truth. Lower is better. Using a simple computation, we can prove that $$R(\delta, \mathbf{p}) = \| \mathrm{Bias}\, \delta(Z_1, \dots, Z_n) \|^2 + \mathrm{trace} \, \mathrm{Var} \, \delta(Z_1, \dots, Z_n),$$ i.e. that the risk trades off the bias (shift) and the variance (width) of the estimator. This decomposition shows that minimizing the variance among unbiased estimators does not lead to the estimator which is expected to be closest to the parameter - instead this tradeoff must be minimized. We can compare the quality of different estimators $\delta_1$ and $\delta_2$ by comparing the risk curves $\mathbf{p} \mapsto R(\delta_1, \mathbf{p})$ and $\mathbf{p} \mapsto R(\delta_2, \mathbf{p})$. For example, if $R(\delta_1, \mathbf{p}) \leq R(\delta_2, \mathbf{p})$ for all parameters $\mathbf{p}$, this means that the estimation rule $\delta_1$ is will be closer on average to the parameter $\mathbf{p}$ than $\delta_2$ for all possible parameter values. This means that $\delta_1$ dominates $\delta_2$. A first example Let's consider the simple example given as the first in OP's question. Here the data $Z=Y \in \mathbb{R}^1$ so that there is only one variable. Let us further assume that $Y = \mu + \epsilon$ for normally distributed $\epsilon$. The estimator formed by minimizing the empirical squared error is the sample mean $\bar{Y} = \frac{1}{n} \sum_{i=1}^n Y_i$. It is a classical result that the estimator $\delta(Y_1, \dots, Y_n) = \bar{Y}$ is admissible. This means that there does not exist any other estimator $\delta_2(Y_1, \dots, Y_n)$ which dominates the sample mean. A second example Now let's consider a linear regression example. Let the data be given by $Z=(y, x)$, where the outcome $y$ is scalar and the covariates $x \in \mathbb{R}^{p-1}$. Assume that $y = x^T \beta + \epsilon$, where $\epsilon$ is normally distributed. Let $\beta$ be the target of inference. The estimate formed by minimizing the empirical squared error is the OLS estimator $\hat\beta$. When $p > 3$, it turns out this estimator is not admissible: that is, there are other estimators which are always closer on average to the true parameter value $\beta$, regardless of its (unknown) value. A classical example is the James-Stein estimator, which equals $s(Z_1, \dots, Z_n) \hat\beta$ for a suitably chosen data-dependent shrinkage term $s \in (0,1)$. 
Conclusion Basing an estimating equation on the loss function does not necessarily lead to finite sample optimality of the estimator. OP is right to question the basis of the procedure.
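As a quick numerical check of the inadmissibility point, here is a small R sketch (my own illustration, using the canonical normal-means form of the James-Stein result rather than the regression form; the dimension and true mean vector are arbitrary):

# Sketch: risk E||estimate - theta||^2 of the MLE X versus the James-Stein estimator
# in the p-dimensional normal-means problem with identity covariance.
set.seed(7)
p <- 10
theta <- rnorm(p)     # an arbitrary true mean vector
B <- 20000

risk_mle <- risk_js <- numeric(B)
for (b in 1:B) {
  X  <- rnorm(p, mean = theta, sd = 1)   # one observation per coordinate
  js <- (1 - (p - 2) / sum(X^2)) * X     # James-Stein shrinkage toward 0
  risk_mle[b] <- sum((X  - theta)^2)
  risk_js[b]  <- sum((js - theta)^2)
}
mean(risk_mle)   # close to p
mean(risk_js)    # smaller: the MLE is dominated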
34,625
Should we always minimize squared deviations if we want to find the dependency of mean on features?
Can it be the case that a use of something different from squared deviation gives a better estimate of model parameters (for example more accurate (smaller width) and with smaller or no systematic from the correct model parameters? The keyword in your question is "better". To rigorously define "better", you need a loss function. If your loss function is the squared loss and its argument is the prediction error that you commit by using $\mu$ as a prediction of $y$, then estimating $\mu$ by minimizing squared deviations is the right thing to do. In technical terms, you are approximating the expected loss (or statistical risk) with the sample loss (empirical risk). The latter converges to the former (by the law of large numbers), provided that the minimization does not introduce some serious discontinuities at the limit. If your loss function is not the squared loss, or it is not defined on prediction errors (e.g., it is defined on parameter estimation errors), then the minimization of squared deviations can be sub-optimal. In the first case (different loss function), you just need to be coherent. For example, if you use the absolute loss, then you need to minimize absolute deviations. In the second case (loss defined on something different from prediction errors), things get complicated, and I dare say that there are very few analytical results. One of the few results is the Gauss-Markov theorem for linear regression models: under certain assumptions, even if you minimize a quadratic loss function defined over prediction errors, you achieve optimality also with respect to a quadratic loss function defined over parameter estimation errors.
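To make the coherence point concrete, here is a tiny R sketch (my own toy example): minimizing the empirical squared loss over a constant prediction recovers the sample mean, while minimizing the empirical absolute loss recovers the sample median.

# Sketch: the loss you minimize determines which summary of y you are estimating.
set.seed(3)
y <- rexp(200)   # a skewed sample, so mean and median differ

optimise(function(m) mean((y - m)^2), range(y))$minimum   # approx. mean(y)
optimise(function(m) mean(abs(y - m)), range(y))$minimum  # approx. median(y)
mean(y); median(y)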
34,626
Variance estimate for Student's t-distribution with heavy tails
As @whuber observes, the "usual" standard deviation estimate is highly variable when the underlying data is distributed $t$ with degrees of freedom just above 2. Consider the following experiment. We generate 100,000 samples from a $t(2.2)$ distribution, and calculate the sample standard deviation over successive sample sizes in steps of 10, e.g., x[1:10], x[1:20], ... x[1:100000]. We plot the results, which show the instability quite clearly:

df <- 2.2
x <- rt(100000, df)
sd_est <- rep(0, 10000)
for (i in seq_along(sd_est)) {
  sd_est[i] <- sd(x[1:(10*i)])
}
plot(sd_est ~ seq(1, length(x), length.out=length(sd_est)),
     xlab = "Sample size", ylab = "Std. deviation estimate")
abline(h=sqrt(df/(df-2)))  # The true standard deviation

And the plot, with a horizontal line at the true value:

Even when we think we have a stable result, e.g., with sample sizes of 50,000, we can experience a big jump in our estimate with just one observation:

max(x)
[1] 712.4925
which.max(x)
[1] 55119

And of course at no point in the trace are we particularly near the true value. The solution is, as @whuber observes, to use a robust estimator. Note that estimating the parameters of the distribution using maximum likelihood, then calculating an estimate of the std. deviation from the estimated parameters, may also not be a good idea: see this answer to Fitting t-distribution in R: scaling parameter
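If one is willing to assume the degrees of freedom are known (as in the simulation above), a simple option is a quantile-based scale estimate, which is far more stable than sd(); this is my own sketch, not taken from the answers referenced above:

# Sketch: estimate the scale from the IQR and convert to a standard deviation,
# assuming nu = 2.2 is known. For X = scale * T with T ~ t_nu, IQR(X) = 2 * scale * qt(0.75, nu).
nu <- 2.2
x  <- rt(1e5, nu)

scale_hat <- IQR(x) / (2 * qt(0.75, nu))       # robust scale estimate
sd_hat    <- scale_hat * sqrt(nu / (nu - 2))   # implied standard deviation

c(robust = sd_hat, naive = sd(x), true = sqrt(nu / (nu - 2)))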
34,627
Variance estimate for Student's t-distribution with heavy tails
Your example would be a good application of a new estimator for heavy-tailed distributions. I call the method Independent Approximates (IAs), since it utilizes a subsample of n-tuples that are approximately equal. The method can be applied to estimate the location $\mu$, scale $\sigma$, and degrees of freedom $\nu$. Since you are estimating the variance, I'll assume the scale is your primary interest. To estimate the scale, assuming the location is known, one can use the triplet IAs. The triplet IAs are selected by partitioning the original samples into triplets, subselecting those triplets that are approximately equal, and retaining the median sample. The referenced paper provides further details. The triplet IAs are guaranteed to have a finite second moment for all $\nu$, and if $\nu > 2$ the variance of the second moment will be finite. For your example with $\nu=2.2$, the median triplet IA samples will have $\nu_{triplet} = 8.6$. The estimate of the second moment of the triplets, $\mu_{triplet}^2$, can be used to estimate the scale of the original distribution using the following function $\mu = \sqrt{3 \mu_{triplet}^2}$. See my paper Independent Approximates enable closed-form estimation of heavy-tailed distributions for further details and examples.
34,628
Variance estimate for Student's t-distribution with heavy tails
The distribution has a few large values. You don't see them when you plot only a hundred as in your example. The example below shows more clearly that the distribution of the error in the variance estimate has a mode that is unequal to zero, but its mean is not necessarily unequal to zero.

set.seed(1)
nu <- 2.2
sim <- sapply(round(seq(1e4, 1e4, len=10000)),
              function(n) replicate(10, var(rt(n, nu)) - nu/(nu-2)))
matplot(t(sim), pch=21, col=1, bg = 1, cex = 0.5)
abline(h=0)
hist(sim, breaks = seq(min(sim-1), max(sim+1), 0.25), xlim = c(-7,30))
mean(sim)
### the mean of this sample will equal
### 2.297499, which is *above* zero
34,629
What is the distribution of the value in a sample closest to a given value?
Let's solve this for all distributions, normal or not. To this end, let the distribution function be $F$ and let $\epsilon \ge 0$ be any possible distance to $m.$ The event "$X$ is within distance $\epsilon$ of $m$" is the interval $X\in[m-\epsilon, m+\epsilon].$ According to the definition of $F,$ this can be expressed as $$\Pr(|X-m|\le \epsilon) = F(m+\epsilon) - F(m-\epsilon) + \Pr(X=m-\epsilon).$$ (For a Normal distribution, or any continuous distribution, that last term is zero and can be ignored.) The chance this does not occur is its complement, $$\Pr(|X-m|\gt \epsilon) = 1- \Pr(|X-m|\le \epsilon).$$ For a random sample of $n$ independent values, these probabilities multiply (that's the definition of independence). Consequently, the chance that all values in the sample are greater than $\epsilon$ from $m$ is $$\Pr(|X_i-m|\gt \epsilon\ \forall i) = \left[1- \Pr(|X-m|\le \epsilon)\right]^n.$$ Its complement therefore is the chance that at least one of the $X_i$ is within distance $\epsilon$ of $m.$ This is precisely the distribution function of the nearest distance. Writing $E = \min|X_i-m|$ for that distance, we have found $$F_E(\epsilon) = \Pr(E\le \epsilon) = 1 - \left[1- \Pr(|X-m|\le \epsilon)\right]^n.$$ This is a thorough and fully general answer. When $F$ is continuous at $m\pm\epsilon$ (with density function $f$) though, we can (a) neglect that last probability term and (b) differentiate the expression to obtain a density for $E,$ $$f_E(\epsilon) = \frac{\mathrm d}{\mathrm{d}\epsilon} F_E(\epsilon) = n\left[F(m+\epsilon) - F(m-\epsilon)\right]^{n-1} \left(f(m+\epsilon) + f(m-\epsilon)\right).$$ Here are some plots of $f_E$ for various sample sizes from the standard Normal distribution. It all makes sense: as you look from left to right, the sample size increases and therefore the chance of being close to any given $m$ increases. As $m$ increases from $0$ (the mode) to $4$ (far out into the right tail), the chance of being close to $m$ remains small, but the typical nearest distance to $m$ shrinks. In a similar fashion you can write the (more complicated) formula for the signed distance between the nearest $X$ and $m.$ Adding $m$ to this will produce a distribution of the nearest $X,$ if that's what you want.

This is the R code used to generate the figure. It implements $F_E$ as pnormclosest and $f_E$ as dnormclosest. They are readily modified to handle any distribution $F$ by replacing pnorm and dnorm by its distribution and density functions, respectively.

pnormclosest <- function(x, m, n=1, mu=0, sigma=1) {
  1 - (pnorm(m-x, mu, sigma) + pnorm(m+x, mu, sigma, lower.tail=FALSE))^n
}
dnormclosest <- function(x, m, n=1, mu=0, sigma=1) {
  n * (pnorm(m-x, mu, sigma) + pnorm(m+x, mu, sigma, lower.tail=FALSE))^(n-1) *
    (dnorm(m-x, mu, sigma) + dnorm(m+x, mu, sigma))
}
ns <- c(1, 2, 20, 100)
ms <- c(0, 1, 2, 4)
par(mfrow = c(1, length(ns)))
for (n in ns) {
  for (m in ms)
    curve(dnormclosest(x, m, n), 0, 3, ylim=c(0,2), add=m != 0, lwd=2,
          lty=abs(m)+1, col=hsv(abs(m)/(max(abs(ms))+1), .9, .8),
          xlab="Distance", ylab="Density", main=paste0("Sample size ", n))
  legend("topright", bty="n", title="m", legend=ms, lty=abs(ms)+1, lwd=2,
         col=hsv(abs(ms)/(max(abs(ms))+1), .9, .8))
}
par(mfrow=c(1,1))
34,630
Why does torchvision.models.resnet18 not use softmax?
Whether you need a softmax layer to train a neural network in PyTorch will depend on what loss function you use. If you use the torch.nn.CrossEntropyLoss, then the softmax is computed as part of the loss. From the link: The loss can be described as: $$ \text{loss}(x, class) = -\log\left(\frac{\exp(x[class])}{\sum_j \exp(x[j])}\right) $$ This loss is just the composition of a torch.nn.LogSoftmax followed by the torch.nn.NLLLoss loss. From the documentation of torch.nn.CrossEntropyLoss: This criterion combines LogSoftmax and NLLLoss in one single class. and from the documentation of torch.nn.NLLLoss: Obtaining log-probabilities in a neural network is easily achieved by adding a LogSoftmax layer in the last layer of your network. You may use CrossEntropyLoss instead, if you prefer not to add an extra layer. It seems that the developers of these pretrained models had the torch.nn.CrossEntropyLoss in mind when they were creating them.
34,631
What does it imply when the sensitivity = 1.000 and specificity = 0.000?
Sensitivity $= 1$ means you had some true positives and no false negatives: all actual cases were correctly predicted as positive Specificity $= 0$ means you had some false positives and no true negatives: all actual non-cases were incorrectly predicted as positive So having both of these means that everything was predicted to be positive, whether it was an actual case or not You might want to adjust your predictions so some are predicted positive and some negative. How you do this depends on how you are predicting
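A tiny R sketch (toy numbers of my own) shows how an "everything positive" classifier produces exactly this pattern:

# Sketch: predicting every observation as positive gives sensitivity 1 and specificity 0.
truth <- c(1, 1, 1, 0, 0)   # 1 = actual case, 0 = actual non-case
pred  <- rep(1, 5)          # every observation predicted positive

sensitivity <- sum(pred == 1 & truth == 1) / sum(truth == 1)  # = 1 (no false negatives)
specificity <- sum(pred == 0 & truth == 0) / sum(truth == 0)  # = 0 (no true negatives)
c(sensitivity = sensitivity, specificity = specificity)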
34,632
Is it allowed to refer to Artificial Neural Networks as Statistical learning?
The classic The Elements of Statistical Learning handbook by Hastie et al. discusses neural networks among other algorithms, so neural networks evidently qualify as a "statistical learning" method. Depending on whom you'd ask, neural networks are either statistics, statistical learning, pattern recognition, machine learning, deep learning, or artificial intelligence. There's no single, agreed category used by everybody to describe them.
34,633
Is it allowed to refer to Artificial Neural Networks as Statistical learning?
That's a political question, not a statistical one :-) Historically, statistics and machine learning were two distinct communities, with little interaction. ANNs were developed by the machine learning community. Today, the lines might be somewhat blurred, with some statisticians counting ANNs as part of statistics, while some machine learners count logistic regression and even linear regression as machine learning. Needless to say, some members of the opposite camp beg to differ. So, there is no simple answer to your question. Calling ANNs a statistical method might be seen as justified by some and objected to by others.
34,634
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis?
You can use the equivalence between confidence intervals and hypothesis testing: Can we reject a null hypothesis with confidence intervals produced via sampling rather than the null hypothesis? Then you will compute the confidence interval for the difference of the means and reject the null hypothesis when none of the values between $\pm \delta$ are inside the interval. But with this method you will reject the null hypothesis less often than the aimed significance level. This difference arises because confidence intervals relate to point hypotheses, which is not your case.

Graphical view of the sample distribution of $\bar{x}-\bar{y}$ and $\hat{\sigma}$

The images below sketch two situations for a t-test. When we compare two samples with equal size and variance and the null hypothesis is $$H_0: \mu_y-\mu_x = 0$$ then we look at the value of the t-statistic, which relates to the likelihood ratio. $$t = \frac{1}{\sqrt{2/n}} \frac{d}{s_p}$$ When we use instead the null hypothesis $$H_0: \vert \mu_y-\mu_x \vert \leq \delta$$ then the likelihood ratio test works out similarly and is again based on the t-statistic, but now the boundaries are shifted to the left and the right. In the images below the boundaries for the t-value of a 95% significance test are drawn. These boundaries are compared with sample distributions of the standard deviation and difference of means for samples of size 5. The $X$ and $Y$ are normally distributed with equal variance and equal means, except in the lower image where the means differ by $\mu_y-\mu_x = 0.5$.

Likelihood ratio test, T-test with shifted boundaries, not ideal

In the first image, you see that 5% of the samples lead to a rejection of the hypothesis (as designed by setting the level at 95%). However, in the lower image, the rejection rate is lower and not equal to 5% (because the boundaries are wider due to the shift $\delta$). So possibly one can choose to draw the boundaries more narrowly. But for large $s_p$ you get closer to the current boundaries (intuitively, you can say that $\delta$ becomes relatively less important when the variance of the variables is large). The reason that we do not necessarily need to use the likelihood ratio test is that we are not dealing with a simple hypothesis. According to the Neyman-Pearson lemma the likelihood ratio test is the most powerful test. But that is only true when the hypotheses are simple hypotheses (like $H_0: \mu_y-\mu_x = 0$), whereas here we have a composite hypothesis (like $H_0: -\delta \leq \mu_y-\mu_x \leq \delta$). For a composite hypothesis the likelihood ratio test may not always give the specified significance level (we choose boundaries for the likelihood ratio according to the worst case). So we can make sharper boundaries than the likelihood ratio test. However, there is no unique way to do this.
R-code for the images:

nsim <- 10^4
nsmp <- 5
rowDevs <- function(x) {
  n <- length(x[1,])
  sqrt((rowMeans(x^2)-rowMeans(x)^2)*n/(n-1))
}

### simulations
set.seed(1)
x <- matrix(rnorm(nsim*nsmp),nsim)
y <- matrix(rnorm(nsim*nsmp),nsim)

### statistics of difference and variance
d <- rowMeans(y)-rowMeans(x)
v <- (0.5*rowDevs(x)+0.5*rowDevs(y))

## colouring 5% points with t-values above/below qt(0.975, df = 18)
dv_slope <- qt(0.975, df = 18)*sqrt(2/nsmp)
col <- (d/v > dv_slope)+(d/v < -dv_slope)

### plot points
plot(d, v, xlim = c(-4,4), ylim = c(0,1.5),
     pch = 21, col = rgb(col,0,0,0.1), bg = rgb(col,0,0,0.1), cex = 0.5,
     xlab = expression(d == bar(y)-bar(x)),
     ylab = expression(s[p] == sqrt(0.5*s[x]+0.5*s[y])),
     xaxs = "i", yaxs = "i",
     main = expression(H[0] : mu[y]-mu[x]==0))
lines(c(0,10), c(0,10)/dv_slope, col = 1, lty = 2)
lines(-c(0,10), c(0,10)/dv_slope, col = 1, lty = 2)

## shifted boundaries for the composite hypothesis
dlt <- 0.5
dv_slope <- qt(0.975, df = 18)*sqrt(2/nsmp)
col <- ((d-2*dlt)/v > dv_slope)+((d)/v < -dv_slope)

### plot points
plot(d-dlt, v, xlim = c(-4,4), ylim = c(0,1.5),
     pch = 21, col = rgb(col,0,0,0.1), bg = rgb(col,0,0,0.1), cex = 0.5,
     xlab = expression(d == bar(y)-bar(x)),
     ylab = expression(s[p] == sqrt(0.5*s[x]+0.5*s[y])),
     xaxs = "i", yaxs = "i",
     main = expression(H[0] : "|" * mu[x]-mu[y] * "|" <= delta))
lines(c(0,10)+dlt, c(0,10)/dv_slope, col = 1, lty = 2)
lines(-c(0,10)-dlt, c(0,10)/dv_slope, col = 1, lty = 2)

Why does the t-test work for a point hypothesis, $H_0 : \mu = 0$, but not for a composite hypothesis $H_0: -\delta \leq \mu \leq \delta$? In the image below we draw the situation like above, but now we change the standard deviation $\sigma$ of the population from which we draw the sample. Now the image contains two separate clouds. In the one case $\sigma = 1$ like before. In the other case $\sigma = 0.2$, and this creates the additional smaller cloud of points. The diagonal lines are the borders for some critical level of the likelihood ratio. The first case (upper image) is for a point null hypothesis $H_0 : \mu = 0$, the second case is for a composite hypothesis $H_0: -\delta \leq \mu \leq \delta$ (where in this particular image $\delta = 0.15$). When we consider the probability of rejecting the null hypothesis if it is true (type I error), then this probability will depend on the parameters $\mu$ and $\sigma$ (which can differ within the null hypothesis). Dependency on $\mu$: When $\mu$ is closer to either $\pm \delta$ than to $0$, it is intuitive that the null hypothesis is more likely to be rejected, and that we cannot make a test such that the type I error is the same for every value of $\mu$ that corresponds to the null hypothesis. Dependency on $\sigma$: The rejection probability will also depend on $\sigma$. In the first case/image (point hypothesis), the type I error will be constant, independent of $\sigma$. If we change $\sigma$ then this amounts to scaling the sample distribution (represented by the cloud of points in the image) in both the vertical and horizontal directions, and the diagonal boundary lines will cut off the same proportion. In the second case/image (composite hypothesis), the type I error will depend on $\sigma$. The boundary lines are shifted and do not pass through the center of the scaling transformation, so the scaling is no longer an invariant transformation with regard to the type I error.
While these borders relate to some critical likelihood ratio, that ratio is based on one specific case out of the composite hypothesis, and it may not be optimal for other cases. (In the case of point hypotheses there are no 'other cases'; and for the "point hypothesis" $\mu_a - \mu_b = 0$, which is not really a point hypothesis because $\sigma$ is not specified in the hypothesis, it happens to work out because the likelihood ratio is independent of $\sigma$.)
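To make the confidence-interval rule from the first paragraph concrete, here is a minimal R sketch (my own made-up data, with an assumed $\delta = 1$): reject $H_0: \vert \mu_A - \mu_B \vert \leq \delta$ only when the interval for the difference lies entirely outside $[-\delta, \delta]$.

# Sketch: the (conservative) CI-based decision rule described at the top of this answer.
set.seed(10)
delta <- 1
a <- rnorm(40, mean = 0,   sd = 1)
b <- rnorm(40, mean = 2.5, sd = 1)

ci <- t.test(a, b)$conf.int                      # two-sided 95% CI for mu_A - mu_B
reject <- (ci[2] < -delta) || (ci[1] > delta)    # whole interval beyond +/- delta?
ci
reject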
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis?
You can use the equivalence between confidence intervals and hypothesis testing: Can we reject a null hypothesis with confidence intervals produced via sampling rather than the null hypothesis? Then y
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis? You can use the equivalence between confidence intervals and hypothesis testing: Can we reject a null hypothesis with confidence intervals produced via sampling rather than the null hypothesis? Then you will compute the confidence interval for the difference of the means and reject the null hypothesis when none of the values between $\pm \delta$ are inside the interval. But with this method you will reject the null hypothesis less often than the aimed significance level. This difference arrises because confidence intervals relate to point hypotheses, which is not your case. Graphical view of the sample distribution of $\bar{x}-\bar{y}$ and $\hat{\sigma}$ In the image below the images sketches two situations for a t-test When we compare two samples with equal size and variance and the null hypothesis is $$H_0: \mu_y-\mu_x = 0$$ then we look at the value of the t-statistic, which relates to the likelihood-ratio. $$t = \frac{1}{\sqrt{2/n}} \frac{d}{s_p}$$ When we use instead the null hypothesis $$H_0: \vert \mu_y-\mu_x \vert \leq \delta$$ then the likelihood ratio test will work out similar the same and be like the t-statistic but now it is shifted to the left and the right. In the image below the boundaries for the t-value of a 95% significance test are drawn. These boundaries are compared with sample distributions of the standard deviation and difference of means for samples of size 5. The $X$ and $Y$ are normal distributed with equal variance and equal means, except in the lower image where the means differ by $\mu_y-\mu_X = 0.5$. Likelihood ratio test, T-test with shifted boundaries, not ideal In the first image, you see that 5% of the samples lead to a rejection of the hypothesis (as designed by setting the level at 95%). However, in the lower image, the rejection rate is lower and not equal to 5% (Because the boundaries are wider due to the shift $\delta$). So possibly one can choose to draw the boundaries more narrow. But for large $s_p$ you get closer to the current boundaries (Intuitively you can say that $\delta$ becomes less important, relatively smaller, when the variance of the variables is large). The reason is that we do not need to necessarily use the likelihood ratio test is that we are not dealing with a simple hypothesis. According to the Neyman-Pearson lemma the likelihood ratio test is the most powerful test. But, that is only true when the hypotheses are simple hypotheses (like $H_0: \mu_y-\mu_x = 0$), and we have a composite hypothesis (like $H_0: -\delta \leq \mu_y-\mu_x \leq \delta$). For a composite hypothesis the likelihood ratio test may not always give the specified significance level (we choose boundaries for the likelihood ratio according to the worst case). So we can make sharper boundaries than the likelihood ratio test. However, there is no unique way to do this. 
R-code for the images: nsim <- 10^4 nsmp <- 5 rowDevs <- function(x) { n <- length(x[1,]) sqrt((rowMeans(x^2)-rowMeans(x)^2)*n/(n-1)) } ### simulations set.seed(1) x <- matrix(rnorm(nsim*nsmp),nsim) y <- matrix(rnorm(nsim*nsmp),nsim) ### statistics of difference and variance d <- rowMeans(y)-rowMeans(x) v <- (0.5*rowDevs(x)+0.5*rowDevs(y)) ## colouring 5% points with t-values above/below qt(0.975, df = 18) dv_slope <- qt(0.975, df = 18)*sqrt(2/nsmp) col <- (d/v > dv_slope)+(d/v < -dv_slope) ### plot points plot(d,v, xlim = c(-4,4), ylim = c(0,1.5), pch = 21, col = rgb(col,0,0,0.1), bg = rgb(col,0,0,0.1), cex = 0.5, xlab = expression(d == bar(y)-bar(x)), ylab = expression(s[p] == sqrt(0.5*s[x]+0.5*s[y])), xaxs = "i", yaxs = "i", main = expression(H[0] : mu[y]-mu[x]==0)) lines(c(0,10),c(0,10)/dv_slope, col = 1, lty = 2) lines(-c(0,10),c(0,10)/dv_slope, col = 1, lty = 2) ## colouring 5% points with t-values above/below qt(0.975, df = 18) dlt <- 0.5 ## colouring 5% points with t-values above/below qt(0.975, df = 18) dv_slope <- qt(0.975, df = 18)*sqrt(2/nsmp) col <- ((d-2*dlt)/v > dv_slope)+((d)/v < -dv_slope) ### plot points plot(d-dlt,v, xlim = c(-4,4), ylim = c(0,1.5), pch = 21, col = rgb(col,0,0,0.1), bg = rgb(col,0,0,0.1), cex = 0.5, xlab = expression(d == bar(y)-bar(x)), ylab = expression(s[p] == sqrt(0.5*s[x]+0.5*s[y])), xaxs = "i", yaxs = "i", main = expression(H[0] : "|" * mu[x]-mu[y] * "|" <= delta)) lines(c(0,10)+dlt,c(0,10)/dv_slope, col = 1, lty = 2) lines(-c(0,10)-dlt,c(0,10)/dv_slope, col = 1, lty = 2) Why does the t-test work for point hypothesis, $H_0 : \mu = 0$, but not for a composite hypothesis $H_0: \sigma \leq \mu \leq \sigma$? In the image below we draw the situation like above, but now we change the standard deviation $\sigma$ of the population from which we draw the sample. Now the image contains two seperate clouds. In the one case $\sigma = 1$ like before. In the other case $\sigma = 0.2$, and this creates the additional smaller little cloud of points. The diagonal lines are the borders for some critical level of the likelihood ratio. The first case (upper image) is for a point null hypothesis $H_0 : \mu = 0$, the second case is for a composite hypothesis $H_0: \sigma \leq \mu \leq \sigma$ (where in this particular image $\sigma = 0.15$). When we consider the probability of rejecting the null hypothesis if it is true (type I error), then this probability will depend on the parameters $\mu$ and $\sigma$ (which can differ within the null hypothesis). Dependency on $\mu$: When $\mu$ is closer to either $\pm \delta$ instead of $0$ then it might be intuitive that the null hypothesis is more likely to be rejected, and that we can not make a test such that the the type 1 error is the same for whatever value of $\mu$ that corresponds to the null hypothesis. Dependency on $\sigma$: The rejection probability will also depend on $\sigma$. In the first case/image (point hypothesis), then independent of $\sigma$ the type I error will be constant. If we change the $\sigma$ then this relates to scaling the sample distribution (represented by the cloud of points in the image) in both vertical and horizontal directions and the diagonal boundary line will intersect the same proportion. In the second case/image (composite hypothesis), then the the type I error will depend on $\sigma$. The boundary lines are shifted and do not pass through the center of the scaling transformation, so the scaling won't be an invariant transformation anymore with regards to the type I error. 
While these borders relate to some critical likelihood ratio, this is based on the ratio for a specific case out of the composite hypotheses, and may not be optimal for other cases. (In the case of point hypotheses there are no 'other cases'; and in the case of the "point hypothesis" $\mu_a - \mu_b = 0$, which is not really a point hypothesis because $\sigma$ is not specified in the hypothesis, it happens to work out because the likelihood ratio is independent of $\sigma$.)
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis? You can use the equivalence between confidence intervals and hypothesis testing: Can we reject a null hypothesis with confidence intervals produced via sampling rather than the null hypothesis? Then y
34,635
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis?
Alternatively, using simulations, i.e. the bootstrap method (R code follows). # Generate 1000 random normal values: x with mean 0 and y with mean 1 x = rnorm(1000,0,1) y = rnorm(1000,1,1) # Repeat many times: sample with replacement x and y, # calculate the mean of the new samples, take the difference res = replicate(1e4, mean(sample(x,replace=T)) - mean(sample(y,replace=T))) # Estimate the desired probability mean(abs(res) <= 1) [1] 0.1583 mean(abs(res) <= 1.1) [1] 0.8875
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis?
Alternatively using simulations i.e. the bootstrap method (R code follows). # Generate 1000 random standard normal values for x and y x = rnorm(1000,0,1) y = rnorm(1000,1,1) # Repeat many times: samp
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis? Alternatively using simulations i.e. the bootstrap method (R code follows). # Generate 1000 random standard normal values for x and y x = rnorm(1000,0,1) y = rnorm(1000,1,1) # Repeat many times: sample with replacement x and y, # calculate the mean of the new samples, take the difference res = replicate(1e4, mean(sample(x,replace=T)) - mean(sample(y,replace=T))) # Estimate the desired probability mean(abs(res) <= 1) [1] 0.1583 mean(abs(res) <= 1.1) [1] 0.8875
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis? Alternatively using simulations i.e. the bootstrap method (R code follows). # Generate 1000 random standard normal values for x and y x = rnorm(1000,0,1) y = rnorm(1000,1,1) # Repeat many times: samp
34,636
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis?
I think one possible solution for this test is to turn to regression to get the two means calculate the absolute value of their difference from the regression coefficients (a non-linear combination). Let's call this random variable $|\Delta|$. Once this is done, you have two choices. You can look at the one-sided CI for $|\Delta|$ to see if it excludes your superiority threshold. You can get that easily from step (2), since the overlap between two one-sided 95% CIs makes a two-sided 90% CI, so you can work backwards from the usual 90% CI for $|\Delta|$. Alternatively, you can perform a two-sided hypothesis test on $|\Delta|$, and then calculate the one-sided p-value from that. This is a bit more work, but is just a matter of getting the sign of the inequality, a $\chi^2$ statistic from the two-sided test, and evaluating cumulative standard normal distribution. If your test returns an F-statistic, you will have to use that instead, along with the t distribution in place of the normal. If you don't want to go this route, when $|\Delta| - \delta$ is positive, you can simply divide the two-sided p-value by 2. In the other case, you need to calculate $1-\frac{p}{2}$ since you are in the other tail. This simpler division approach works for symmetric distributions only. Here is an example in Stata, where we will conduct two such hypotheses comparing the average price of foreign (foreign = 1) and domestic cars (foreign = 0): . sysuse auto, clear (1978 Automobile Data) . table foreign, c(mean price) ----------------------- Car type | mean(price) ----------+------------ Domestic | 6,072.4 Foreign | 6,384.7 ----------------------- . /* (1) Calculate the means using regression */ . regress price ibn.foreign, noconstant Source | SS df MS Number of obs = 74 -------------+---------------------------------- F(2, 72) = 159.91 Model | 2.8143e+09 2 1.4071e+09 Prob > F = 0.0000 Residual | 633558013 72 8799416.85 R-squared = 0.8162 -------------+---------------------------------- Adj R-squared = 0.8111 Total | 3.4478e+09 74 46592355.7 Root MSE = 2966.4 ------------------------------------------------------------------------------ price | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- foreign | Domestic | 6072.423 411.363 14.76 0.000 5252.386 6892.46 Foreign | 6384.682 632.4346 10.10 0.000 5123.947 7645.417 ------------------------------------------------------------------------------ . /* (2) Calculate the absolute value of the foreign-domestic difference */ . nlcom av_diff:abs(_b[1.foreign] - _b[0.foreign]), level(90) post av_diff: abs(_b[1.foreign] - _b[0.foreign]) ------------------------------------------------------------------------------ price | Coef. Std. Err. z P>|z| [90% Conf. Interval] -------------+---------------------------------------------------------------- av_diff | 312.2587 754.4488 0.41 0.679 -928.6992 1553.217 ------------------------------------------------------------------------------ . /* (3a) We know that a one-sided 95% CI is (-inf,1553.217] */ . /* (3b) Transform two-sided test into a one-sided test and get p-values */ . // Test something just inside the CI */ . // H_0': (avg_price_foreign - avg_price_domestic) <= 1553 . // H_1': (avg_price_foreign - avg_price_domestic) > 1553 . test av_diff = 1553 ( 1) av_diff = 1553 chi2( 1) = 2.70 Prob > chi2 = 0.1001 . local sign_av_diff = sign(_b[av_diff] - 1553) // get the sign . display "p-value' = " normal(`sign_av_diff'*sqrt(r(chi2))) p-value' = .05002962 . 
// Test something just above the CI */ . // H_0'': (avg_price_foreign - avg_price_domestic) <= 1554 . // H_1'': (avg_price_foreign - avg_price_domestic) > 1554 . test av_diff = 1554 ( 1) av_diff = 1554 chi2( 1) = 2.71 Prob > chi2 = 0.0998 . local sign_av_diff = sign(_b[av_diff] - 1554) // get the sign . display "p-value = " normal(`sign_av_diff'*sqrt(r(chi2))) p-value = .049893 The one-sided 95% CI is $(-\infty, 1553.217]$, so $\delta>1553.217$ in order for us to reject. If we try testing a value below that upper bound, like 1553, the one-sided p-value is .05003, so we cannot reject. If we test something just above the UB, like 1554, the p-value is .049893, so we can reject at $\alpha=5\%$. I don't advocate using rigid thresholds for significance; this is just meant to illustrate the intuition. Note that you can also divide the two-sided p-values by 2 to get this (Stata's two-sided p-values are on the "Prob > chi2" line). Here the null is $H_0: |\Delta|\le \delta$ (practical equivalence) versus $H_a: |\Delta| > \delta$ (non-equivalence). We focus on testing $|\Delta| = \delta$, so we calculate the probability at the most extreme point of the null hypothesis, the one closest to the alternative parameter space. This means that the p-value is exact only for $|\Delta| = \delta$. If $|\Delta| < \delta$, then our p-value is just a conservative bound on the type I error rate (the error being finding a negative effect when there is none).
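For readers without Stata, here is a rough base-R sketch of the same logic in a simple two-group setting. The data, the group sizes, and the margin delta are all made up for illustration, and it relies on a normal approximation for the difference of means, roughly as the nlcom/test steps above do; it is a sketch of the idea, not a replication of the Stata output.

# Hypothetical two-group data (not the auto data set used above)
set.seed(42)
yA <- rnorm(50, mean = 10, sd = 3)   # group A responses (assumed)
yB <- rnorm(60, mean = 11, sd = 3)   # group B responses (assumed)
delta <- 2                           # margin of practical importance (assumed)

# Point estimate of the absolute difference and its standard error
d_hat <- abs(mean(yA) - mean(yB))
se_d  <- sqrt(var(yA)/length(yA) + var(yB)/length(yB))

# Wald-type test of H0: |mu_A - mu_B| <= delta vs Ha: |mu_A - mu_B| > delta,
# evaluated at the boundary |difference| = delta (normal approximation)
z <- (d_hat - delta) / se_d
p_one_sided <- pnorm(z, lower.tail = FALSE)

c(abs_difference = d_hat, one_sided_p = p_one_sided)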
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis?
I think one possible solution for this test is to turn to regression to get the two means calculate the absolute value of their difference from the regression coefficients (a non-linear combination).
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis? I think one possible solution for this test is to turn to regression to get the two means calculate the absolute value of their difference from the regression coefficients (a non-linear combination). Let's call this random variable $|\Delta|$. Once this is done, you have two choices. You can look at the one-sided CI for $|\Delta|$ to see if it excludes your superiority threshold. You can get that easily from step (2), since the overlap between two one-sided 95% CIs makes a two-sided 90% CI, so you can work backwards from the usual 90% CI for $|\Delta|$. Alternatively, you can perform a two-sided hypothesis test on $|\Delta|$, and then calculate the one-sided p-value from that. This is a bit more work, but is just a matter of getting the sign of the inequality, a $\chi^2$ statistic from the two-sided test, and evaluating cumulative standard normal distribution. If your test returns an F-statistic, you will have to use that instead, along with the t distribution in place of the normal. If you don't want to go this route, when $|\Delta| - \delta$ is positive, you can simply divide the two-sided p-value by 2. In the other case, you need to calculate $1-\frac{p}{2}$ since you are in the other tail. This simpler division approach works for symmetric distributions only. Here is an example in Stata, where we will conduct two such hypotheses comparing the average price of foreign (foreign = 1) and domestic cars (foreign = 0): . sysuse auto, clear (1978 Automobile Data) . table foreign, c(mean price) ----------------------- Car type | mean(price) ----------+------------ Domestic | 6,072.4 Foreign | 6,384.7 ----------------------- . /* (1) Calculate the means using regression */ . regress price ibn.foreign, noconstant Source | SS df MS Number of obs = 74 -------------+---------------------------------- F(2, 72) = 159.91 Model | 2.8143e+09 2 1.4071e+09 Prob > F = 0.0000 Residual | 633558013 72 8799416.85 R-squared = 0.8162 -------------+---------------------------------- Adj R-squared = 0.8111 Total | 3.4478e+09 74 46592355.7 Root MSE = 2966.4 ------------------------------------------------------------------------------ price | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- foreign | Domestic | 6072.423 411.363 14.76 0.000 5252.386 6892.46 Foreign | 6384.682 632.4346 10.10 0.000 5123.947 7645.417 ------------------------------------------------------------------------------ . /* (2) Calculate the absolute value of the foreign-domestic difference */ . nlcom av_diff:abs(_b[1.foreign] - _b[0.foreign]), level(90) post av_diff: abs(_b[1.foreign] - _b[0.foreign]) ------------------------------------------------------------------------------ price | Coef. Std. Err. z P>|z| [90% Conf. Interval] -------------+---------------------------------------------------------------- av_diff | 312.2587 754.4488 0.41 0.679 -928.6992 1553.217 ------------------------------------------------------------------------------ . /* (3a) We know that a one-sided 95% CI is (-inf,1553.217] */ . /* (3b) Transform two-sided test into a one-sided test and get p-values */ . // Test something just inside the CI */ . // H_0': (avg_price_foreign - avg_price_domestic) <= 1553 . // H_1': (avg_price_foreign - avg_price_domestic) > 1553 . test av_diff = 1553 ( 1) av_diff = 1553 chi2( 1) = 2.70 Prob > chi2 = 0.1001 . local sign_av_diff = sign(_b[av_diff] - 1553) // get the sign . 
display "p-value' = " normal(`sign_av_diff'*sqrt(r(chi2))) p-value' = .05002962 . // Test something just above the CI */ . // H_0'': (avg_price_foreign - avg_price_domestic) <= 1554 . // H_1'': (avg_price_foreign - avg_price_domestic) > 1554 . test av_diff = 1554 ( 1) av_diff = 1554 chi2( 1) = 2.71 Prob > chi2 = 0.0998 . local sign_av_diff = sign(_b[av_diff] - 1554) // get the sign . display "p-value = " normal(`sign_av_diff'*sqrt(r(chi2))) p-value = .049893 The one-sided 95% CI is $(-\infty, 1553.217]$, so $\delta>1553.217$ in order of us to reject. If we try testing a value below that upper bound like 1553, the one-sided p-value is .05003, so we cannot reject. If we test something just above the UB, like 1554, the p-value is .049893, so we can reject at $\alpha=5\%$. I don't advocate using rigid thresholds for significance, this is just meant to illustrate the intuition. Note that you can also divide the two-sided p-values by 2 to get this (Stata's two-sided p-values are on the "Prob > chi2" line). Here the null is $H_0=|\Delta|\le \delta$ (practical equivalence) versus $H_a=|\Delta| > \delta$ (non-equivalence). We focus on testing $|\Delta| = \delta$, so we calculate the probability at the most extreme point of the null hypothesis, closest to alternative parameter space. This means that the p-value is exact only for $|\Delta| = \delta$. If $|\Delta| < \delta$, then our p-value is just a conservative bound on the type I error rate (the error being finding a negative effect when there is none).
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis? I think one possible solution for this test is to turn to regression to get the two means calculate the absolute value of their difference from the regression coefficients (a non-linear combination).
34,637
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis?
You can perform a t-test and just look at the confidence intervals. In some circumstances (e.g. clinical trials) you are not interested in statistical significance, but in whether the difference is meaningful from a practical point of view, which is judged by adding a margin $\delta$ (in a clinical-trials setting this is called clinical significance). Have a look at the picture. We assess the mean response difference between the experimental and control groups.
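As a concrete illustration of this confidence-interval view, here is a small R sketch with made-up data: compute the usual two-sample confidence interval for the mean difference and compare it with the margin $\pm\delta$. Both the simulated data and the value of delta are assumptions chosen only for the example.

set.seed(1)
treatment <- rnorm(40, mean = 5.2, sd = 1)   # hypothetical experimental group
control   <- rnorm(40, mean = 5.0, sd = 1)   # hypothetical control group
delta     <- 0.5                             # assumed clinically relevant margin

ci <- t.test(treatment, control)$conf.int    # 95% CI for the mean difference
ci
# The difference is declared practically relevant only if the whole CI lies
# outside [-delta, delta]; report which of the three cases applies here.
if (ci[1] > delta | ci[2] < -delta) {
  "entire CI outside the margin: practically relevant difference"
} else if (ci[1] > -delta & ci[2] < delta) {
  "entire CI inside the margin: difference is practically negligible"
} else {
  "CI overlaps the margin: inconclusive"
}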
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis?
You can perform a t-test and just look at confidence intervals. In some circumstances (e.g. clinical trials) you are not interested in statistical significance, but whether the difference is significa
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis? You can perform a t-test and just look at confidence intervals. In some circumstances (e.g. clinical trials) you are not interested in statistical significance, but whether the difference is significant from a practical point of view by adding a margin $\delta$ (in a clinical trials setting it’s called clinical significance). Have a look at the picture. We assess mean response difference in experimental and control group.
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis? You can perform a t-test and just look at confidence intervals. In some circumstances (e.g. clinical trials) you are not interested in statistical significance, but whether the difference is significa
34,638
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis?
One of the ideas is to add $\delta$ to one population (raising its mean) and, in a second test, to subtract $\delta$, then compute the statistic and figure out the p-values of the two "one-sided tests"; after combining these you will have one p-value for the two-sided test stated in your question. It's like solving an equation in elementary school: $$|\mu_A - \mu_B| \le \delta \Rightarrow \begin{cases} \mu_A - \mu_B \le \delta, & \text{if}\ \mu_A - \mu_B \ge 0 \\[2ex] \mu_A - \mu_B \ge -\delta, & \text{if}\ \mu_A - \mu_B < 0 \end{cases} \Rightarrow\begin{cases} (\mu_A-\delta) - \mu_B \le 0, & \text{if}\ \mu_A - \mu_B \ge 0 \\[2ex] (\mu_A+\delta) - \mu_B \ge 0, & \text{if}\ \mu_A - \mu_B < 0 \end{cases} \Rightarrow\begin{cases} (\mu_A-\delta) \le \mu_B, & \text{if}\ \mu_B \le \mu_A\\[2ex] (\mu_A+\delta) \ge \mu_B, & \text{if}\ \mu_B > \mu_A \end{cases}$$ this is your $H_0$ :) now let's construct $H_1$ $$H_0\begin{cases} (\mu_A-\delta) \le \mu_B, & \text{if}\ \mu_B \le \mu_A\\[2ex] (\mu_A+\delta) \ge \mu_B, & \text{if}\ \mu_B > \mu_A \end{cases}, H_1\begin{cases} (\mu_A-\delta) \ge \mu_B, & \text{if}\ \mu_B \le \mu_A, & (1)\\[2ex] (\mu_A+\delta) \le \mu_B, & \text{if}\ \mu_B > \mu_A, & (2) \end{cases}$$ for $(1)$ you want to compute the p-value $$p((\mu_A-\delta) \ge \mu_B|\mu_A \ge \mu_B) = \frac{p((\mu_A-\delta) \ge \mu_B)}{p(\mu_A \ge \mu_B)}$$ analogously for $(2)$, and combining: $$p(|\mu_A - \mu_B| \le \delta) = 1-p((\mu_A-\delta) \ge \mu_B|\mu_A \ge \mu_B) - p((\mu_A+\delta) \le \mu_B|\mu_A \lt \mu_B)$$ Ask questions if needed; I am not entirely sure of this approach and would welcome any critique.
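To make the shifted-tests idea concrete, here is a minimal R sketch with simulated data and an assumed margin delta: the mu argument of t.test shifts the hypothesized difference, so the two "one-sided tests" mentioned above can be obtained directly. How the two p-values should be combined is a separate question, and the combination formula above is the answerer's own proposal rather than a standard procedure.

set.seed(7)
A <- rnorm(30, mean = 0.0, sd = 1)   # hypothetical sample from population A
B <- rnorm(30, mean = 0.3, sd = 1)   # hypothetical sample from population B
delta <- 1                           # assumed margin

# One-sided test with the difference shifted up by delta:
# H0: mu_A - mu_B = delta, alternative mu_A - mu_B < delta
p_upper <- t.test(A, B, mu =  delta, alternative = "less")$p.value
# One-sided test with the difference shifted down by delta:
# H0: mu_A - mu_B = -delta, alternative mu_A - mu_B > -delta
p_lower <- t.test(A, B, mu = -delta, alternative = "greater")$p.value

c(p_upper = p_upper, p_lower = p_lower)
# In the usual TOST (two one-sided tests) procedure for H0: |mu_A - mu_B| >= delta,
# one rejects when both p-values are small; note that that null is the reverse of
# the one asked about in this question, so interpret the output with care.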
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis?
one of ideas is to add $\delta$ to one population (raising mean) and in second test substracting $\delta$ and then computing statistic and figure out in two "one-sided tests" p-values, after adding th
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis? one of ideas is to add $\delta$ to one population (raising mean) and in second test substracting $\delta$ and then computing statistic and figure out in two "one-sided tests" p-values, after adding these you will have one p-value for two sided test stated in your question it's like solving equation in elementary school: $$|\mu_A - \mu_B| \le \delta => \begin{cases} \mu_A - \mu_B \le \delta, & \text{if}\ \mu_A - \mu_B \ge 0 \\[2ex] \mu_A - \mu_B \ge -\delta, & \text{if}\ \mu_A - \mu_B < 0 \end{cases} =>\begin{cases} (\mu_A-\delta) - \mu_B \le 0, & \text{if}\ \mu_A - \mu_B \ge 0 \\[2ex] (\mu_A+\delta) - \mu_B \ge 0, & \text{if}\ \mu_A - \mu_B < 0 \end{cases} =>\begin{cases} (\mu_A-\delta) \le \mu_B, & \text{if}\ \mu_B \le \mu_A\\[2ex] (\mu_A+\delta) \ge \mu_B, & \text{if}\ \mu_B > \mu_A \end{cases}$$ this is your $H_0$ :) now let's construct $H_1$ $$H_0\begin{cases} (\mu_A-\delta) \le \mu_B, & \text{if}\ \mu_B \le \mu_A\\[2ex] (\mu_A+\delta) \ge \mu_B, & \text{if}\ \mu_B > \mu_A \end{cases}, H_1\begin{cases} (\mu_A-\delta) \ge \mu_B, & \text{if}\ \mu_B \le \mu_A, & (1)\\[2ex] (\mu_A+\delta) \le \mu_B, & \text{if}\ \mu_B > \mu_A, & (2) \end{cases}$$ for $(1)$ you want to compute p-value that $$p((\mu_A-\delta) \ge \mu_B|\mu_A \ge \mu_B) = \frac{p((\mu_A-\delta) \ge \mu_B)}{p(\mu_A \ge \mu_B)}$$ analogous for $(2)$, and combining $$p(|\mu_A - \mu_B| \le \delta) = 1-p((\mu_A-\delta) \ge \mu_B|\mu_A \ge \mu_B) - p((\mu_A+\delta) \le \mu_B|\mu_A \lt \mu_B)$$ ask questions if needed, I am not entirely sure of this approach, and would welcome any critique
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis? one of ideas is to add $\delta$ to one population (raising mean) and in second test substracting $\delta$ and then computing statistic and figure out in two "one-sided tests" p-values, after adding th
34,639
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis?
Just for a comment; Let $F$ be the cumulative distribution of $p(\ \ |{H_0})$, that means $$F(t) = p(t>T |\ {H_0}\ is\ true) . \tag{1-1}$$ Here, $p(t>T\ |\ {H_0}\ is\ true)$ is the probability that $t>T$ under the condition that $H_0$ is true; $T$ is a random variable representing the t-value, and $t$ is a real number substituted into $F$. And, let $t_{obs}$ be the t-value calculated from actual observations. Then, the p-value shall be: $$p-value = p(|t|>|t_{obs}|\ |\ {H_0}\ is\ true). \tag{1-2}$$ Therefore, $$p-value = p(|t|>|t_{obs}|\ |\ {H_0}\ is\ true)$$ $$=p(\ t>|t_{obs}|\ or\ \ t<-|t_{obs}|\ |\ {H_0}\ is\ true)$$ $$=p(\ t>|t_{obs}|\ |\ {H_0}\ is\ true)\ +\ p(\ t<-|t_{obs}|\ |\ {H_0}\ is\ true) $$ $$=F(-|t_{obs}|)+(1-F(|t_{obs}|))$$ $$=1+F(-|t_{obs}|)-F(|t_{obs}|) \tag{1-3}$$ Thus, the essence of my question would be what function $F$ in (1-1) would be under my ${H}_{0}$. If the mean and standard deviation of the population are known, I think these distributions can be brought to a form similar to the simulation of user2974951 by using the reproductive property of the normal distribution. However, if both the mean and standard deviation of the population are unknown, then I have no idea. I'm waiting for your opinion.
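As a quick numerical check of formula (1-3) under the ordinary point null, where $F$ can be taken as the Student-t CDF, the small R sketch below shows that $1+F(-|t_{obs}|)-F(|t_{obs}|)$ reproduces the usual two-sided p-value. For the composite null of the question, $F$ would have to be replaced by whatever sampling distribution is appropriate, which is exactly the open issue raised above; the t-value and degrees of freedom here are arbitrary example numbers.

t_obs <- 2.1   # example observed t-value (arbitrary)
df    <- 18    # example degrees of freedom (arbitrary)

# Formula (1-3) with F taken as the t CDF:
p_formula <- 1 + pt(-abs(t_obs), df) - pt(abs(t_obs), df)

# The usual two-sided p-value:
p_usual <- 2 * pt(-abs(t_obs), df)

c(p_formula = p_formula, p_usual = p_usual)   # identical, by symmetry of the t distribution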
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis?
Just for a comment; Let $F$ be the cumulative distribution of $p(\ \ |{H_0})$, that means $$F(t) = p(t>T |\ {H_0}\ is\ true) . \tag{1-1}$$ Here, $p(t>-\infty\ |\ {H_0}\ is\ true)$ is the probability t
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis? Just for a comment; Let $F$ be the cumulative distribution of $p(\ \ |{H_0})$, that means $$F(t) = p(t>T |\ {H_0}\ is\ true) . \tag{1-1}$$ Here, $p(t>-\infty\ |\ {H_0}\ is\ true)$ is the probability that $t>T$ under the condition that $H_0$ is true, $T$ is a random value representing the t-value. The $t$ is a real number substituted to the $F$. And, let $t_{obs}$ be the t-value calculated from actual observations. Then, the p-value shall be; $$p-value = p(|t|>|t_{obs}|\ |\ {H_0}\ is\ true). \tag{1-2}$$ Therefore, $$p-value = p(|t|>|t_{obs}|\ |\ {H_0}\ is\ true)$$ $$=p(\ t>|t_{obs}|\ or\ \ t<-|t_{obs}|\ |\ {H_0}\ is\ true)$$ $$=p(\ t>|t_{obs}|\ |\ {H_0}\ is\ true)\ +\ p(\ t<-|t_{obs}|\ |\ {H_0}\ is\ true) $$ $$=F(-|t_{obs}|)+(1-F(|t_{obs}|))$$ $$=1+F(-|t_{obs}|)-F(|t_{obs}|) \tag{1-3}$$ Thus, the essence of my question would be what function $F$ in (1-1) would be under my ${H}_{0}$. If the mean and standard deviation of the population are known, I think these distributions can be brought to a form similar to the simulation of user2974951 by using the regenerability of the normal distribution. However, if both of the mean and standard deviation of the population are unknown, then I have no idea. I'm waiting for your opinion.
Is there a test that uses $|{\mu_A}-{\mu_B}|\le \delta $ as the null hypothesis? Just for a comment; Let $F$ be the cumulative distribution of $p(\ \ |{H_0})$, that means $$F(t) = p(t>T |\ {H_0}\ is\ true) . \tag{1-1}$$ Here, $p(t>-\infty\ |\ {H_0}\ is\ true)$ is the probability t
34,640
What is the mean absolute difference between values in a normal distribution?
Assume that $X, Y\sim N(\mu,\sigma^2)$ are iid. Then their difference is $X-Y\sim N(0,2\sigma^2)$. As you write, the expectation of this difference is zero. And the absolute value of this difference $|X-Y|$ follows a folded normal distribution. Its mean can be found by plugging the mean $0$ and variance $2\sigma^2$ of $X-Y$ into the formula at the Wikipedia page: $$ \sqrt{2}\sigma\sqrt{\frac{2}{\pi}} = \frac{2\sigma}{\sqrt{\pi}}. $$ A quick simulation in R is consistent with this: > nn <- 1e6 > sigma <- 2 > set.seed(1) > XX <- rnorm(nn,0,sigma) > YY <- rnorm(nn,0,sigma) > mean(abs(XX-YY)) [1] 2.257667 > sqrt(2)*sigma*sqrt(2/pi) [1] 2.256758
What is the mean absolute difference between values in a normal distribution?
Assume that $X, Y\sim N(\mu,\sigma^2)$ are iid. Then their difference is $X-Y\sim N(0,2\sigma^2)$. As you write, the expectation of this difference is zero. And the absolute value of this difference $
What is the mean absolute difference between values in a normal distribution? Assume that $X, Y\sim N(\mu,\sigma^2)$ are iid. Then their difference is $X-Y\sim N(0,2\sigma^2)$. As you write, the expectation of this difference is zero. And the absolute value of this difference $|X-Y|$ follows a folded normal distribution. Its mean can be found by plugging the mean $0$ and variance $2\sigma^2$ of $X-Y$ into the formula at the Wikipedia page: $$ \sqrt{2}\sigma\sqrt{\frac{2}{\pi}} = \frac{2\sigma}{\sqrt{\pi}}. $$ A quick simulation in R is consistent with this: > nn <- 1e6 > sigma <- 2 > set.seed(1) > XX <- rnorm(nn,0,sigma) > YY <- rnorm(nn,0,sigma) > mean(abs(XX-YY)) [1] 2.257667 > sqrt(2)*sigma*sqrt(2/pi) [1] 2.256758
What is the mean absolute difference between values in a normal distribution? Assume that $X, Y\sim N(\mu,\sigma^2)$ are iid. Then their difference is $X-Y\sim N(0,2\sigma^2)$. As you write, the expectation of this difference is zero. And the absolute value of this difference $
34,641
Mixture of non-normals is normal?
I can show you all examples, not just the simple ones. Solution Here they are, schematically: The bottom panels show how the density function $f$ of a distribution $F$ is split into two parts vertically along a nearly arbitrary curve. The cyan portion of the split is a fraction $\lambda$ of $f;$ the upper left plots its graph. The remaining portion (gray) therefore is a fraction $1-\lambda$ of $f$ whose graph is plotted in the upper right. This is how all mixtures arise. (Notice that little is assumed about the density $f$ except that it exists.) Details The mixture distribution $F$ is Normal, which means there is a mean $\mu$ and variance $\sigma^2$ for which $F$ has a density function $f(z;\mu,\sigma).$ The details of $f$ don't matter! Let $\lambda:\mathbb{R}\to[0,1]$ be any (measurable) non-negative function. This means the following integrals involving $\lambda$ are defined and non-negative: $$\pi_\lambda = \int_\mathbb{R} \lambda(z)f(z;\mu,\sigma)\,\mathrm{d}z \le \sup(\lambda)\, \int_\mathbb{R}f(z;\mu,\sigma)\,\mathrm{d}z \le (1)(1)=1;$$ $$1-\pi_\lambda = 1 - \int_\mathbb{R} \lambda(z)f(z;\mu,\sigma)\,\mathrm{d}z = \int_\mathbb{R} (1-\lambda(z))f(z;\mu,\sigma)\,\mathrm{d}z \le 1.$$ (The first inequality is an easy special case of Hölder's Inequality.) Define two distributions as $$F_{\lambda}(x) = \frac{1}{\pi_\lambda}\int_{-\infty}^x \lambda(z)f(z;\mu,\sigma)\,\mathrm{d}z;$$ $$F_{1-\lambda}(x) = \frac{1}{1-\pi_\lambda}\int_{-\infty}^x (1-\lambda(z))f(z;\mu,\sigma)\,\mathrm{d}z.$$ It is straightforward to establish that these are distribution functions and, by construction, $$F = \pi_\lambda F_\lambda + (1-\pi_\lambda) F_{1-\lambda}\tag{*}$$ exhibits the original normal distribution as a mixture of these two. Conversely, whenever there exist differentiable functions with property $(*),$ then a version of $\lambda$ can be recovered via $$\lambda(z) = \left\{\begin{aligned}\frac{F^\prime_\lambda(z)}{f(z;\mu,\sigma)} &\quad&f(z;\mu,\sigma)\ne 0\\ 0 & &\text{otherwise}\end{aligned}\right.$$ and because $0 \le \pi_\lambda\le 1,$ the range of $\lambda$ is contained in $[0,1],$ QED. Finally, it is possible for the component distributions to be Normal: for instance, when $\lambda$ is a constant function that will be the case. That is the only possibility, though: see https://stats.stackexchange.com/a/429877/919 for the proof. Application As requested in comments, it would be of interest to choose $\lambda$ to meet a set of criteria, such as the following. (1) Give the components equal weights, which means $$\frac{1}{2}=\pi_\lambda = \int \lambda(z) f(z)\,\mathrm{d}z.$$ (2) Since these are intended to model errors in a regression setting (with $\mu=0$), we would like each of the components also to have zero mean: $0 = E_{F_\lambda}[X].$ In light of (1), that is equivalent to $$0 = \int z\lambda(z) f(z)\,\mathrm{d}z.$$ (3) Since regression errors are often assumed to be homoscedastic -- of equal variances -- we would like the variances of $F_\lambda$ and $F_{1-\lambda}$ to be equal. Since they have means of zero, when $f$ is a Normal density, this is achieved when $$\sigma^2 = 2\int z^2\lambda(z) f(z)\,\mathrm{d}z.$$ Although there are many solutions to these equations, one simple (striking) solution is obtained by supposing $\lambda$ and $1-\lambda$ are both simple functions: that is, piecewise constant. By making $\lambda$ symmetric around $0$ we can assure that (2) holds.
The simplest of such simple functions is zero except on some positive interval $[a,b]$ and its negative $[-b,-a],$ where it equals $1.$ Without any loss of generality take $\sigma^2=1,$ so that $f = \phi$ is the standard Normal density with the property $\phi^\prime(z) = -z\phi(z).$ Using this fact we may compute $$\int \lambda(z)\phi(z)\,\mathrm{d}z = 2 \int_a^b \phi(z)\,\mathrm{d}z = 2(\Phi(b)-\Phi(a))$$ (where $\Phi$ is the standard Normal distribution function) and $$\begin{aligned} \int z^2 \lambda(z)\phi(z)\,\mathrm{d}z &= 2 \int_a^b z^2\phi(z)\,\mathrm{d}z \\ &= 2(\Phi(b) - \Phi(a) + a\phi(a) - b\phi(b)). \end{aligned}$$ This permits numerical solution of (1) and (3). The work is streamlined by noting from (1) that, given $0 \le a\lt \Phi^{-1}(3/4),$ $$b = b(a) = \Phi^{-1}(\Phi(a) + 1/4).$$ That leaves us to solve (3) for $a \ge 0$. Here is an R implementation to illustrate: f <- function(a) { b <- qnorm(1/4 + (q <- pnorm(a))) pnorm(b) - q + a * dnorm(a) - b * dnorm(b) - 1/4 } uniroot(f, c(0, qnorm(3/4)- 1e-6))$root -> a qnorm(pnorm(a) + 1/4) -> b This calculation gives $a \approx 0.508949$ and $b \approx 1.59466.$ Here are plots of the two component densities $f_\lambda$ and $f_{1-\lambda}:$ To illustrate the intended application, here are bivariate data with 150 responses at $X=0$ with errors distributed as $F_\lambda$ and 150 responses at $X=1$ with errors distributed as $F_{1-\lambda}.$ To the right is a quantile plot of the collected residuals. Although separately neither group of residuals appears Normal, they are both centered at zero, have nearly the same variance, and collectively look perfectly Normal. Remarks The basic construction readily generalizes to mixtures with more than two components. The example in the application can be extended, by using simple (indicator) functions supported on intervals $[a_i,b_i]$ with $0\le a_1 \lt b_1 \le a_2 \lt b_2 \cdots \lt b_k,$ to create component distributions that match the first $2k$ moments of the Normal distribution their mixture creates. With sufficiently large $k,$ the component distributions will be difficult to discriminate even with largish datasets (at which point one might legitimately wonder whether their non-Normality matters at all).
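A small R sketch (an illustration added here, not part of the original answer) can verify the construction numerically: draw standard normal values, assign each draw to the $\lambda$ component when $a \le |z| \le b$ and to the $1-\lambda$ component otherwise, and check that the two components have weight about $1/2$, mean about $0$, and variance about $1$, using the values of $a$ and $b$ computed above.

a <- 0.508949; b <- 1.59466           # values from the calculation above
set.seed(17)
z <- rnorm(1e6)                       # draws from the standard normal mixture
in_lambda <- abs(z) >= a & abs(z) <= b   # membership in the lambda component

# Weight, mean and variance of each component (should be ~0.5, ~0, ~1)
c(weight      = mean(in_lambda),
  mean_lambda = mean(z[in_lambda]),  var_lambda = var(z[in_lambda]),
  mean_other  = mean(z[!in_lambda]), var_other  = var(z[!in_lambda]))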
Mixture of non-normals is normal?
I can show you all examples, not just the simple ones. Solution Here they are, schematically: The bottom panels show how the density function $f$ of a distribution $F$ is split into two parts vertica
Mixture of non-normals is normal? I can show you all examples, not just the simple ones. Solution Here they are, schematically: The bottom panels show how the density function $f$ of a distribution $F$ is split into two parts vertically along a nearly arbitrary curve. The cyan portion of the split is a fraction $\lambda$ of $f;$ the upper left plots its graph. The remaining portion (gray) therefore is a fraction $1-\lambda$ of $f$ whose graph is plotted in the upper right. This is how all mixtures arise. (Notice that little is assumed about the density $f$ except that it exists.) Details The mixture distribution $F$ is Normal, which means there is a mean $\mu$ and variance $\sigma^2$ for which $F$ has a density function $f(z;\mu,\sigma).$ The details of $f$ don't matter! Let $\lambda:\mathbb{R}\to[0,1]$ be any (measurable) non-negative function. This means the following integrals involving $\lambda$ are defined and non-negative: $$\pi_\lambda = \int_\mathbb{R} \lambda(z)f(z;\mu,\sigma)\,\mathrm{d}z \le \sup(\lambda)\, \int_\mathbb{R}f(z;\mu,\sigma)\,\mathrm{d}z \le (1)(1)=1;$$ $$1-\pi_\lambda = 1 - \int_\mathbb{R} \lambda(z)f(z;\mu,\sigma)\,\mathrm{d}z = \int_\mathbb{R} (1-\lambda(z))_f(z;\mu,\sigma)\,\mathrm{d}z \le 1.$$ (The first inequality is an easy special case of Holder's Inequality.) Define two distributions as $$F_{\lambda}(x) = \frac{1}{\pi_\lambda}\int_{-\infty}^x \lambda(z)f(z;\mu,\sigma)\,\mathrm{d}z;$$ $$F_{1-\lambda}(x) = \frac{1}{1-\pi_\lambda}\int_{-\infty}^x (1-\lambda(z))f(z;\mu,\sigma)\,\mathrm{d}z.$$ It is straightforward to establish that these are distribution functions and, by construction, $$F = \pi_\lambda F_\lambda + (1-\pi_\lambda) F_{1-\lambda}\tag{*}$$ exhibits the original normal distribution as a mixture of these two. Conversely, whenever there exist differentiable functions with property $(*),$ then a version of $\lambda$ can be recovered via $$\lambda(z) = \left\{\begin{aligned}\frac{F^\prime_\lambda(z)}{f(z;\mu,\sigma)} &\quad&f(z;\mu,\sigma)\ne 0\\ 0 & &\text{otherwise}\end{aligned}\right.$$ and because $0 \le \pi_\lambda\le 1,$ the range of $\lambda$ is contained in $[0,1],$ QED. Finally, it is possible for the component distributions to be Normal: for instance, when $\lambda$ is a constant function that will be the case. That is the only possibility, though: see https://stats.stackexchange.com/a/429877/919 for the proof. Application As requested in comments, it would be of interest to choose $\lambda$ to meet a set of criteria, such as Give the components equal weights, which means $$\frac{1}{2}=\pi_\lambda = \int \lambda(z) f(z)\,\mathrm{d}z.$$ Since these are intended to model errors in a regression setting (with $\mu=0,$ we would like each of the components also to have zero mean: $0 = E_{F_\lambda}[X].$ In light of (1), that is equivalent to $$0 = \int z\lambda(z) f(z)\,\mathrm{d}z.$$ Since regression errors are often assumed to be homoscedastic -- of equal variances -- we would like the variances of $F_\lambda$ and $F_{1-\lambda}$ to be equal. Since they have means of zero, when $f$ is a Normal density, this is achieved when $$\sigma^2 = 2\int z^2\lambda(z) f(z)\,\mathrm{d}z.$$ Although there are many solutions to these equations, one simple (striking) solution is obtained by supposing $\lambda$ and $1-\lambda$ are both simple functions: that is, piecewise constant. By making $\lambda$ symmetric around $0$ we can assure that (2) holds. 
The simplest of such simple functions is zero except on some positive interval $[a,b]$ and its negative $[-b,-a],$ where it equals $1.$ Without any loss of generality take $\sigma^2=1,$ so that $f = \phi$ is the standard Normal density with the property $\phi^(z) = -z\phi(z).$ Using this fact we may compute $$\int \lambda(z)\phi(z)\,\mathrm{d}z = 2 \int_a^b \phi(z)\,\mathrm{d}z = 2(\Phi(b)-\Phi(a))$$ (where $\Phi$ is the standard Normal distribution function) and $$\begin{aligned} \int z^2 \lambda(z)\phi(z)\,\mathrm{d}z &= 2 \int_a^b z^2\phi(z)\,\mathrm{d}z \\ &= 2(\Phi(b) - \Phi(a) + a\phi(b) - b\phi(b)). \end{aligned}$$ This permits numerical solution of (1) and (3). The work is streamlined by noting from (1) that, given $0 \le a\lt \Phi^{-1}(3/4),$ $$b = b(a) = \Phi^{-1}(\Phi(a) + 1/4).$$ That leaves us to solve (3) for $a \ge 0$. Here is an R implementation to illustrate: f <- function(a) { b <- qnorm(1/4 + q <- pnorm(a)) pnorm(b) - q + a * dnorm(a) - b * dnorm(b) - 1/4 } uniroot(f, c(0, qnorm(3/4)- 1e-6))$root -> a qnorm(pnorm(a) + 1/4) -> b This calculation gives $a \approx 0.508949$ and $b \approx 1.59466.$ Here are plots of the two component densities $f_\lambda$ and $f_{1-\lambda}:$ To illustrate the intended application, here are bivariate data with 150 responses at $X=0$ with errors distributed as $F_\lambda$ and 150 responses at $X=1$ with errors distributed as $F_{1-\lambda}.$ To the right is a quantile plot of the collected residuals. Although separately neither group of residuals appears Normal, they are both centered at zero, have nearly the same variance, and collectively look perfectly Normal. Remarks The basic construction readily generalizes to mixtures with more than two components. The example in the application can be extended, by using simple (indicator) functions supported on intervals $[a_i,b_i]$ with $0\le a_1 \lt b_1 \le a_2 \lt b_2 \cdots \lt b_k,$ to create component distributions that match the first $2k$ moments of the Normal distribution their mixture creates. With sufficiently large $k,$ the component distributions will be difficult to discriminate even with largish datasets (at which point one might legitimately wonder whether their non-Normality matters at all).
Mixture of non-normals is normal? I can show you all examples, not just the simple ones. Solution Here they are, schematically: The bottom panels show how the density function $f$ of a distribution $F$ is split into two parts vertica
34,642
Mixture of non-normals is normal?
A very simple example comes from the skew normal distribution with density $$ 2\phi(x)\Phi(\alpha x). $$ Choose for the two components $\alpha, -\alpha$; then $$ \frac12 2 \phi(x) \Phi(-\alpha x) + \frac12 2 \phi(x) \Phi(\alpha x) $$ is the standard normal density $\phi(x)$, by symmetry, since $\Phi(-\alpha x) = 1-\Phi(\alpha x)$; but unfortunately the two mixture components do not have equal means. A simple example with equal means is obtained by exploiting $1=\sin^2 x +\cos^2 x$: simply define the mixture components by $$ \phi(x) = \sin^2(x) \phi(x) + \cos^2(x) \phi(x) $$ and both components have mean zero.
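A short R sketch (purely illustrative, not from the original answer) confirms the claims numerically with integrate(): the two component weights sum to one, and $z\sin^2(z)\phi(z)$ and $z\cos^2(z)\phi(z)$ each integrate to zero, so both (normalized) components have mean zero.

phi <- dnorm
# Integrate over [-10, 10], which is effectively the whole real line since phi decays fast
w1 <- integrate(function(z) sin(z)^2 * phi(z), -10, 10)$value      # weight of component 1
w2 <- integrate(function(z) cos(z)^2 * phi(z), -10, 10)$value      # weight of component 2
m1 <- integrate(function(z) z * sin(z)^2 * phi(z), -10, 10)$value  # unnormalized mean of component 1
m2 <- integrate(function(z) z * cos(z)^2 * phi(z), -10, 10)$value  # unnormalized mean of component 2

c(weights_sum = w1 + w2, mean1 = m1, mean2 = m2)   # approximately 1, 0, 0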
Mixture of non-normals is normal?
A very simple example from the Skew normal distribution with density $$ 2\phi(x)\Phi(\alpha x) $$ Choose for the twocomponents $\alpha, -\alpha$ then $$ \frac12 2 \phi(x) \Phi(-\alpha x) + \frac12 2 \
Mixture of non-normals is normal? A very simple example from the Skew normal distribution with density $$ 2\phi(x)\Phi(\alpha x) $$ Choose for the twocomponents $\alpha, -\alpha$ then $$ \frac12 2 \phi(x) \Phi(-\alpha x) + \frac12 2 \phi(x) \Phi(\alpha x) $$ is the standard normal density $\phi(x)$, by using symmetry, since $\Phi(-\alpha x) = 1-\Phi(\alpha x)$, but unfortunately the two mixture components do not have equal mean. A simple example with equal means is got by exploiting $1=\sin^2 x +\cos^2 x$ so simply define mixture components by $$ \phi(x) = \sin^2(x) \phi(x) + \cos^2(x) \phi(x) $$ and both components have mean zero.
Mixture of non-normals is normal? A very simple example from the Skew normal distribution with density $$ 2\phi(x)\Phi(\alpha x) $$ Choose for the twocomponents $\alpha, -\alpha$ then $$ \frac12 2 \phi(x) \Phi(-\alpha x) + \frac12 2 \
34,643
How to combine standard errors for correlated variables
I find a little algebraic manipulation of the following nature to provide a congenial path to solving problems like this -- where you know the covariance matrix of variables $(B,C)$ and wish to estimate the variance of some function of them, such as $B/C.$ (This is often called the "Delta Method.") Write $$B = \beta + X,\ C = \gamma + Y$$ where $\beta$ is the expectation of $B$ and $\gamma$ that of $C.$ This makes $(X,Y)$ a zero-mean random variable with the same variances and covariance as $(B,C).$ Seemingly nothing is accomplished, but this decomposition is algebraically suggestive, as in $$A = \frac{B}{C} = \frac{\beta+X}{\gamma+Y} = \left(\frac{\beta}{\gamma}\right) \frac{1 + X/\beta}{1+Y/\gamma}.$$ That is, $A$ is proportional to a ratio of two numbers that might both be close to unity. This is the circumstance that permits an approximate calculation of the variance of $A$ based only on the covariance matrix of $(B,C).$ Right away this division by $\gamma$ shows the futility of attempting a solution when $\gamma \approx 0.$ (See https://stats.stackexchange.com/a/299765/919 for illustrations of what goes wrong when dividing one random variable by another that has a good chance of coming very close to zero.) Assuming $\gamma$ is reasonably far from $0,$ the foregoing expression also hints at the possibility of approximating the second fraction using the MacLaurin series for $(1+Y/\gamma)^{-1},$ which will be possible provided there is little chance that $|Y/\gamma|\ge 1$ (outside the range of absolute convergence of this expansion). In other words, further suppose the distribution of $C$ is concentrated between $0$ and $2\gamma.$ In this case the series gives $$\begin{aligned} \frac{1 + X/\beta}{1+Y/\gamma} &= \left(1 + X/\beta\right)\left(1 - (Y/\gamma) + O\left((Y/\gamma)^2\right)\right)\\&= 1 + X/\beta - Y/\gamma + O\left(\left(X/\beta\right)(Y/\gamma)^2\right).\end{aligned}$$ We may neglect the last term provided the chance of $(X/\beta)(Y/\gamma)^2$ being large is tiny. This is tantamount to supposing most of the probability of $C$ is very close to $\gamma$ and that $X$ and $Y^2$ are not too strongly correlated. In this case $$\begin{aligned} \operatorname{Var}(A) &\approx \left(\frac{\beta}{\gamma}\right)^2\operatorname{Var}(1 + X/\beta - Y/\gamma)\\ &= \left(\frac{\beta}{\gamma}\right)^2\left( \frac{1}{\beta^2}\operatorname{Var}(B) + \frac{1}{\gamma^2}\operatorname{Var}(C) - \frac{2}{\beta\gamma}\operatorname{Cov}(B,C)\right) \\ &= \frac{1}{\gamma^2} \operatorname{Var}(B) + \frac{\beta^2}{\gamma^4}\operatorname{Var}(C) - \frac{2\beta}{\gamma^3}\operatorname{Cov}(B,C). \end{aligned}$$ You might wonder why I fuss over the assumptions. They matter. One way to check them is to generate Normally distributed variates $B$ and $C$ in a simulation: it will provide a good estimate of the variance of $A$ and, to the extent $A$ appears approximately Normally distributed, will confirm the three bold assumptions needed to rely on this result do indeed hold. For instance, with the covariance matrix $\pmatrix{1&-0.9\\-0.9&1}$ and means $(\beta,\gamma)=(5, 10),$ the approximation does OK (left panel): The variance of these 100,000 simulated values is $0.0233,$ close to the formula's value of $0.0215.$ But reducing $\gamma$ from $10$ to $4,$ which looks innocent enough ($4$ is still four standard deviations of $C$ away from $0$) has profound effects due to the strong correlation of $B$ and $C,$ as seen in the right hand histogram.
Evidently $C$ has a small but appreciable chance of being nearly $0,$ creating large values of $B/C$ (both negative and positive). This is a case where we should not neglect the $XY^2$ term in the MacLaurin expansion. Now the variance of these 100,000 simulated values of $A$ is $2.200$ but the formula gives $0.301,$ far too small. This is the R code that generated the first figure. A small change in the third line generates the second figure. n <- 1e5 # Simulation size beta <- 5 gamma <- 10 Sigma <- matrix(c(1, -0.9, -0.9, 1), 2) library(MASS) #mvrnorm bc <- mvrnorm(n, c(beta, gamma), Sigma) A <- bc[, 1] / bc[, 2] # # Report the simulated and approximate variances. # signif(c(`Var(A)`=var(A), Approx=(Sigma[1,1]/gamma^2 + beta^2*Sigma[2,2]/gamma^4 - 2*beta/gamma^3*Sigma[1,2])), 3) hist(A, freq=FALSE, breaks=50, col="#f0f0f0") curve(dnorm(x, mean(A), sd(A)), col="SkyBlue", lwd=2, add=TRUE)
How to combine standard errors for correlated variables
I find a little algebraic manipulation of the following nature to provide a congenial path to solving problems like this -- where you know the covariance matrix of variables $(B,C)$ and wish to estima
How to combine standard errors for correlated variables I find a little algebraic manipulation of the following nature to provide a congenial path to solving problems like this -- where you know the covariance matrix of variables $(B,C)$ and wish to estimate the variance of some function of them, such as $B/C.$ (This is often called the "Delta Method.") Write $$B = \beta + X,\ C = \gamma + Y$$ where $\beta$ is the expectation of $B$ and $\gamma$ that of $C.$ This makes $(X,Y)$ a zero-mean random variable with the same variances and covariance as $(B,C).$ Seemingly nothing is accomplished, but this decomposition is algebraically suggestive, as in $$A = \frac{B}{C} = \frac{\beta+X}{\gamma+Y} = \left(\frac{\beta}{\gamma}\right) \frac{1 + X/\beta}{1+Y/\gamma}.$$ That is, $A$ is proportional to a ratio of two numbers that might both be close to unity. This is the circumstance that permits an approximate calculation of the variance of $A$ based only on the covariance matrix of $(B,C).$ Right away this division by $\gamma$ shows the futility of attempting a solution when $\gamma \approx 0.$ (See https://stats.stackexchange.com/a/299765/919 for illustrations of what goes wrong when dividing one random variable by another that has a good chance of coming very close to zero.) Assuming $\gamma$ is reasonably far from $0,$ the foregoing expression also hints at the possibility of approximating the second fraction using the MacLaurin series for $(1+Y/\gamma)^{-1},$ which will be possible provided there is little change that $|Y/\gamma|\ge 1$ (outside the range of absolute convergence of this expansion). In other words, further suppose the distribution of $C$ is concentrated between $0$ and $2\gamma.$ In this case the series gives $$\begin{aligned} \frac{1 + X/\beta}{1+Y/\gamma} &= \left(1 + X/\beta\right)\left(1 - (Y/\gamma) + O\left((Y/\gamma)^2\right)\right)\\&= 1 + X/\beta - Y/\gamma + O\left(\left(X/\beta\right)(Y/\gamma)^2\right).\end{aligned}$$ We may neglect the last term provided the chance that $(X/\beta)(Y/\gamma)^2$ being large is tiny. This is tantamount to supposing most of the probability of $Y$ is very close to $\gamma$ and that $X$ and $Y^2$ are not too strongly correlated. In this case $$\begin{aligned} \operatorname{Var}(A) &\approx \left(\frac{\beta}{\gamma}\right)^2\operatorname{Var}(1 + X/\beta - Y/\gamma)\\ &= \left(\frac{\beta}{\gamma}\right)^2\left( \frac{1}{\beta^2}\operatorname{Var}(B) + \frac{1}{\gamma^2}\operatorname{Var}(C) - \frac{2}{\beta\gamma}\operatorname{Cov}(B,C)\right) \\ &= \frac{1}{\gamma^2} \operatorname{Var}(B) + \frac{\beta^2}{\gamma^4}\operatorname{Var}(C) - \frac{2\beta}{\gamma^3}\operatorname{Cov}(B,C). \end{aligned}$$ You might wonder why I fuss over the assumptions. They matter. One way to check them is to generate Normally distributed variates $B$ and $C$ in a simulation: it will provide a good estimate of the variance of $A$ and, to the extent $A$ appears approximately Normally distributed, will confirm the three bold assumptions needed to rely on this result do indeed hold. For instance, with the covariance matrix $\pmatrix{1&-0.9\\-0.9&1}$ and means $(\beta,\gamma)=(5, 10),$ the approximation does OK (left panel): The variance of these 100,000 simulated values is $0.0233,$ close to the formula's value of $0.0215.$ But reducing $\gamma$ from $10$ to $4,$ which looks innocent enough ($4$ is still four standard deviations of $C$ away from $0$) has profound effects due to the strong correlation of $B$ and $C,$ as seen in the right hand histogram. 
Evidently $C$ has a small but appreciable chance of being nearly $0,$ creating large values of $B/C$ (both negative and positive). This is a case where we should not neglect the $XY^2$ term in the MacLaurin expansion. Now the variance of these 100,000 simulated values of $A$ is $2.200$ but the formula gives $0.301,$ far too small. This is the R code that generated the first figure. A small change in the third line generates the second figure. n <- 1e5 # Simulation size beta <- 5 gamma <- 10 Sigma <- matrix(c(1, -0.9, -0.9, 1), 2) library(MASS) #mvrnorm bc <- mvrnorm(n, c(beta, gamma), Sigma) A <- bc[, 1] / bc[, 2] # # Report the simulated and approximate variances. # signif(c(`Var(A)`=var(A), Approx=(Sigma[1,1]/gamma^2 + beta^2*Sigma[2,2]/gamma^4 - 2*beta/gamma^3*Sigma[1,2])), 3) hist(A, freq=FALSE, breaks=50, col="#f0f0f0") curve(dnorm(x, mean(A), sd(A)), col="SkyBlue", lwd=2, add=TRUE)
How to combine standard errors for correlated variables I find a little algebraic manipulation of the following nature to provide a congenial path to solving problems like this -- where you know the covariance matrix of variables $(B,C)$ and wish to estima
34,644
Does there exist a Frequentist or Non-Bayesian solution to Gull's Lighthouse Problem?
Interesting problem indeed. Three non-Bayesian solutions follow. Classical Physicist's view Here's my physicist's solution to it. $$\alpha=median[x_k]=F^{-1}(1/2)$$ $$\beta=\tfrac{1}{2}\left(F^{-1}(3/4)-F^{-1}(1/4)\right)$$ Here $F$ is the empirical CDF of the Cauchy sample, so $F^{-1}$ is its quantile (percentile) function. The Cauchy distribution (aka Breit-Wigner in physics) has no mean, but it is symmetric. So the median is a decent estimate of $\alpha$. Since it has no variance either, in physics the notion of width at half height is used to describe its dispersion when working with this distribution. It so happens that the full width at half maximum of the PDF is $2\beta$ and corresponds to the span between the first and third quartiles (the interquartile range), so half of that span estimates $\beta$. MLE Maximum likelihood estimation, of course, is more efficient. However, mine is very simple and intuitive. OLS (doesn't work) There's also a terrible regression solution. Look at the quantile function of the distribution: $$Q(p; \alpha,\beta) = \alpha + \beta\,\tan\left[\pi\left(p-\tfrac{1}{2}\right)\right]$$ It appears that we can convert it into an OLS problem (assuming the $x_k$ are ordered!): $$x_k = \alpha + \beta\,\tan\left[\pi\left((k-1/2)/N-\tfrac{1}{2}\right)\right]$$ $$x_k = \alpha + \beta\,z_k,$$ where $z_k=\tan\left[\pi\left((k-1/2)/N-\tfrac{1}{2}\right)\right]$ OLS will immediately give you the estimates $\hat\alpha,\hat\beta$. The problem is that OLS assumes finite variance, and we don't have it with Cauchy. So the output of OLS is garbage. Here's R code to experiment and see how my first method is pretty robust. alpha <- 10 # unknown true values beta <- 30 # this is known for now ################## set.seed(123) N <- 1024 theta_k <- runif(N,-pi/2,pi/2) x_k <- beta * tan(theta_k) + alpha q123 = quantile(x_k,c(1/4,1/2,3/4),type=1) print("alpha") print(q123[2]) print("beta") print((q123[3] - q123[1])/2) y = sort(x_k) x = tan(pi*((seq(1:N)-0.5)/N-1/2)) fit = lm(y~x) print(fit)
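Since the MLE is only mentioned above, here is a hedged R sketch of how it could be computed by direct numerical maximization of the Cauchy log-likelihood, reusing the simulated x_k from the code above; the negloglik function, the use of optim, and the starting values are my own choices for illustration, not part of the original answer.

# Maximum likelihood for (alpha, beta) of a Cauchy sample by direct optimization.
negloglik <- function(par, x) {
  alpha <- par[1]
  beta  <- exp(par[2])               # log-parameterization keeps the scale positive
  -sum(dcauchy(x, location = alpha, scale = beta, log = TRUE))
}

# Start from the robust quantile-based estimates discussed above
start <- c(median(x_k), log(IQR(x_k) / 2))
fit <- optim(start, negloglik, x = x_k)
c(alpha_hat = fit$par[1], beta_hat = exp(fit$par[2]))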
Does there exist a Frequentist or Non-Bayesian solution to Gull's Lighthouse Problem?
Interesting problem indeed. Three non-Bayesian solutions follow. Classical Physicist's view Here's my physicist's solution to it. $$\alpha=median[x_k]=F^{-1}(1/2)$$ $$\beta=F^{-1}(3/4)-F^{-1}(1/4)$$ H
Does there exist a Frequentist or Non-Bayesian solution to Gull's Lighthouse Problem? Interesting problem indeed. Three non-Bayesian solutions follow. Classical Physicist's view Here's my physicist's solution to it. $$\alpha=median[x_k]=F^{-1}(1/2)$$ $$\beta=F^{-1}(3/4)-F^{-1}(1/4)$$ Here $F$ is the empirical CDF of Cauchy distribution, or you could also call it a quantile (percentile) function. Cauchy distribution (aka Breit-Wigner in physics) has no mean, but it is symmetric. So, the median is a decent estimate of the $\alpha$. Since, it has no variance either, in physics when using this distribution the notion of width at half height is used to describe its dispersion. It happens so that the width at half height of PDF is $\beta$ and corresponds to the span between first and third quartiles (interquartile range). MLE Maximum likelihood estimation, of course, is more efficient. However, mine is very simple and intuitive. OLS (doesn't work) There's also a terrible regression solution. Look at the quantile function of the distribution: $$Q(p; \alpha,\beta) = \alpha + \beta\,\tan\left[\pi\left(p-\tfrac{1}{2}\right)\right]$$ It appears that, we can convert it into OLS problem (assuming $x_k$ are ordered!): $$x_k = \alpha + \beta\,\tan\left[\pi\left((k-1/2)/N-\tfrac{1}{2}\right)\right]$$ $$x_k = \alpha + \beta\,z_k,$$ where $z_k=\tan\left[\pi\left((k-1/2)/N-\tfrac{1}{2}\right)\right]$ OLS will give you immediately the estimates $\hat\alpha,\hat\beta$. The problem is that, OLS assumes finite variance, and we don't have it with Cauchy. So, the output of OLS is garbage. Here's R code to experiment and see how my first method is pretty robust. alpha <- 10 # unkonwn true values beta <- 30 # this is known for now ################## set.seed(123) N <- 1024 theta_k <- runif(N,-pi/2,pi/2) x_k <- beta * tan(theta_k) + alpha q123 = quantile(x_k,c(1/4,1/2,3/4),type=1) print("alpha") print(q123[2]) print("beta") print((q123[3] - q123[1])/2) y = sort(x_k) x = tan(pi*((seq(1:N)-0.5)/N-1/2)) fit = lm(y~x) print(fit) Outputs: alpha <- 10 # unkonwn true values beta <- 30 # this is known for now ################## set.seed(123) N <- 1024 theta_k <- runif(N,-pi/2,pi/2) x_k <- beta * tan(theta_k) + alpha q123 = quantile(x_k,c(1/4,1/2,3/4),type=1) print("alpha") print(q123[2]) print("beta") print((q123[3] - q123[1])/2) y = sort(x_k) x = tan(pi*((seq(1:N)-0.5)/N-1/2)) fit = lm(y~x) print(fit)
Does there exist a Frequentist or Non-Bayesian solution to Gull's Lighthouse Problem? Interesting problem indeed. Three non-Bayesian solutions follow. Classical Physicist's view Here's my physicist's solution to it. $$\alpha=median[x_k]=F^{-1}(1/2)$$ $$\beta=F^{-1}(3/4)-F^{-1}(1/4)$$ H
34,645
Does there exist a Frequentist or Non-Bayesian solution to Gull's Lighthouse Problem?
Since $x \sim Cauchy(\alpha, \beta)$ and you want to estimate the position of the lighthouse, which is $(\alpha, \beta)$, your problem is simply to make inference on a Cauchy distribution. Here you will find what you need: https://en.wikipedia.org/wiki/Cauchy_distribution#Estimation_of_parameters
Does there exist a Frequentist or Non-Bayesian solution to Gull's Lighthouse Problem?
Since $x \sim Cauchy(\alpha, \beta)$ and you want to estimate the position of the lighthouse, wich is $(\alpha, \beta)$, your problem is simply to make inference on a Cauchy distribution. Here you wil
Does there exist a Frequentist or Non-Bayesian solution to Gull's Lighthouse Problem? Since $x \sim Cauchy(\alpha, \beta)$ and you want to estimate the position of the lighthouse, wich is $(\alpha, \beta)$, your problem is simply to make inference on a Cauchy distribution. Here you will find what you need: https://en.wikipedia.org/wiki/Cauchy_distribution#Estimation_of_parameters
Does there exist a Frequentist or Non-Bayesian solution to Gull's Lighthouse Problem? Since $x \sim Cauchy(\alpha, \beta)$ and you want to estimate the position of the lighthouse, wich is $(\alpha, \beta)$, your problem is simply to make inference on a Cauchy distribution. Here you wil
34,646
Is there a probability distribution like the binomial distribution but with continuous rather than binary trial outputs?
The sum of i.i.d. uniform random variables follows the Irwin–Hall distribution.
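As a quick illustration (my own sketch, with an arbitrary choice of n), the sum of $n$ i.i.d. $U(0,1)$ variables should have mean $n/2$ and variance $n/12$, which a short simulation confirms; the Irwin–Hall density itself is piecewise polynomial and can be found in standard references.

set.seed(3)
n <- 8                                   # number of uniform "trials" (arbitrary)
sums <- replicate(1e5, sum(runif(n)))    # simulate the Irwin-Hall variable

c(sim_mean = mean(sums), theory_mean = n/2,
  sim_var  = var(sums),  theory_var  = n/12)

hist(sums, breaks = 60, freq = FALSE,
     main = "Sum of 8 iid U(0,1): Irwin-Hall(8)")   # already looks close to normal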
Is there a probability distribution like the binomial distribution but with continuous rather than b
The sum of i.i.d. uniform random variables follows the Irwin–Hall distribution.
Is there a probability distribution like the binomial distribution but with continuous rather than binary trial outputs? The sum of i.i.d. uniform random variables follows the Irwin–Hall distribution.
Is there a probability distribution like the binomial distribution but with continuous rather than b The sum of i.i.d. uniform random variables follows the Irwin–Hall distribution.
34,647
Confidence Interval of CDF
You can do something like this with simultaneous-quantile regression with a set dummies corresponding to the 4 groups. This allows you to test and construct confidence intervals comparing coefficients describing different quantiles that you care about. Here's a toy example where we cannot reject the joint null that the 25th, 50th, and 75th quartile of car prices are all equal in all 4 MPG groups (the p-value is 0.374): . sysuse auto, clear (1978 Automobile Data) . xtile mpg_quartile = mpg, nq(4) . distplot price, over(mpg_quartile) legend(rows(1)) ylab(.25 .5 .75, angle(0) grid) xlab(#10, grid) /// > plotregion(fcolor(white) lcolor(white)) graphregion(fcolor(white) lcolor(white)) . . sqreg price i.mpg_quart, quantile(.25 .5 .75) reps(500) (fitting base model) Bootstrap replications (500) ----+--- 1 ---+--- 2 ---+--- 3 ---+--- 4 ---+--- 5 .................................................. 50 .................................................. 100 .................................................. 150 .................................................. 200 .................................................. 250 .................................................. 300 .................................................. 350 .................................................. 400 .................................................. 450 .................................................. 500 Simultaneous quantile regression Number of obs = 74 bootstrap(500) SEs .25 Pseudo R2 = 0.0909 .50 Pseudo R2 = 0.1228 .75 Pseudo R2 = 0.2639 ------------------------------------------------------------------------------ | Bootstrap price | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- q25 | mpg_quartile | 2 | -1297 528.3106 -2.45 0.017 -2350.682 -243.3178 3 | -1192 447.9346 -2.66 0.010 -2085.377 -298.6225 4 | -1484 458.6527 -3.24 0.002 -2398.754 -569.2459 | _cons | 5379 414.9198 12.96 0.000 4551.468 6206.532 -------------+---------------------------------------------------------------- q50 | mpg_quartile | 2 | -1442 1253.755 -1.15 0.254 -3942.535 1058.535 3 | -1086 1414.436 -0.77 0.445 -3907.004 1735.004 4 | -1776 1232.862 -1.44 0.154 -4234.867 682.8667 | _cons | 6165 1221.461 5.05 0.000 3728.873 8601.127 -------------+---------------------------------------------------------------- q75 | mpg_quartile | 2 | -6213 1591.987 -3.90 0.000 -9388.118 -3037.882 3 | -4535 1847.591 -2.45 0.017 -8219.904 -850.0963 4 | -6796 1592.095 -4.27 0.000 -9971.334 -3620.666 | _cons | 11385 1556.486 7.31 0.000 8280.686 14489.31 ------------------------------------------------------------------------------ . test /// > ([q25]2.mpg_quart=[q25]3.mpg_quart=[q25]4.mpg_quart) /// > ([q50]2.mpg_quart=[q50]3.mpg_quart=[q50]4.mpg_quart) /// > ([q75]2.mpg_quart=[q75]3.mpg_quart=[q75]4.mpg_quart) ( 1) [q25]2.mpg_quartile - [q25]3.mpg_quartile = 0 ( 2) [q25]2.mpg_quartile - [q25]4.mpg_quartile = 0 ( 3) [q50]2.mpg_quartile - [q50]3.mpg_quartile = 0 ( 4) [q50]2.mpg_quartile - [q50]4.mpg_quartile = 0 ( 5) [q75]2.mpg_quartile - [q75]3.mpg_quartile = 0 ( 6) [q75]2.mpg_quartile - [q75]4.mpg_quartile = 0 F( 6, 70) = 1.10 Prob > F = 0.3740 The ECDF looks like this: Though there seem to be large differences between group 1 and groups 2-4 for the 3 quantiles in the graph. However, this is not a lot of data, so the failure to reject with the formal test is perhaps not that surprising because of the "micronumerosity". 
Interestingly, the Kruskal-Wallis test of the hypothesis that 4 groups are from the same population rejects: . kwallis price , by(mpg_quartile) Kruskal-Wallis equality-of-populations rank test +---------------------------+ | mpg_qu~e | Obs | Rank Sum | |----------+-----+----------| | 1 | 27 | 1397.00 | | 2 | 11 | 286.00 | | 3 | 22 | 798.00 | | 4 | 14 | 294.00 | +---------------------------+ chi-squared = 23.297 with 3 d.f. probability = 0.0001 chi-squared with ties = 23.297 with 3 d.f. probability = 0.0001
Confidence Interval of CDF
You can do something like this with simultaneous-quantile regression with a set dummies corresponding to the 4 groups. This allows you to test and construct confidence intervals comparing coefficients
Confidence Interval of CDF You can do something like this with simultaneous-quantile regression with a set dummies corresponding to the 4 groups. This allows you to test and construct confidence intervals comparing coefficients describing different quantiles that you care about. Here's a toy example where we cannot reject the joint null that the 25th, 50th, and 75th quartile of car prices are all equal in all 4 MPG groups (the p-value is 0.374): . sysuse auto, clear (1978 Automobile Data) . xtile mpg_quartile = mpg, nq(4) . distplot price, over(mpg_quartile) legend(rows(1)) ylab(.25 .5 .75, angle(0) grid) xlab(#10, grid) /// > plotregion(fcolor(white) lcolor(white)) graphregion(fcolor(white) lcolor(white)) . . sqreg price i.mpg_quart, quantile(.25 .5 .75) reps(500) (fitting base model) Bootstrap replications (500) ----+--- 1 ---+--- 2 ---+--- 3 ---+--- 4 ---+--- 5 .................................................. 50 .................................................. 100 .................................................. 150 .................................................. 200 .................................................. 250 .................................................. 300 .................................................. 350 .................................................. 400 .................................................. 450 .................................................. 500 Simultaneous quantile regression Number of obs = 74 bootstrap(500) SEs .25 Pseudo R2 = 0.0909 .50 Pseudo R2 = 0.1228 .75 Pseudo R2 = 0.2639 ------------------------------------------------------------------------------ | Bootstrap price | Coef. Std. Err. t P>|t| [95% Conf. Interval] -------------+---------------------------------------------------------------- q25 | mpg_quartile | 2 | -1297 528.3106 -2.45 0.017 -2350.682 -243.3178 3 | -1192 447.9346 -2.66 0.010 -2085.377 -298.6225 4 | -1484 458.6527 -3.24 0.002 -2398.754 -569.2459 | _cons | 5379 414.9198 12.96 0.000 4551.468 6206.532 -------------+---------------------------------------------------------------- q50 | mpg_quartile | 2 | -1442 1253.755 -1.15 0.254 -3942.535 1058.535 3 | -1086 1414.436 -0.77 0.445 -3907.004 1735.004 4 | -1776 1232.862 -1.44 0.154 -4234.867 682.8667 | _cons | 6165 1221.461 5.05 0.000 3728.873 8601.127 -------------+---------------------------------------------------------------- q75 | mpg_quartile | 2 | -6213 1591.987 -3.90 0.000 -9388.118 -3037.882 3 | -4535 1847.591 -2.45 0.017 -8219.904 -850.0963 4 | -6796 1592.095 -4.27 0.000 -9971.334 -3620.666 | _cons | 11385 1556.486 7.31 0.000 8280.686 14489.31 ------------------------------------------------------------------------------ . test /// > ([q25]2.mpg_quart=[q25]3.mpg_quart=[q25]4.mpg_quart) /// > ([q50]2.mpg_quart=[q50]3.mpg_quart=[q50]4.mpg_quart) /// > ([q75]2.mpg_quart=[q75]3.mpg_quart=[q75]4.mpg_quart) ( 1) [q25]2.mpg_quartile - [q25]3.mpg_quartile = 0 ( 2) [q25]2.mpg_quartile - [q25]4.mpg_quartile = 0 ( 3) [q50]2.mpg_quartile - [q50]3.mpg_quartile = 0 ( 4) [q50]2.mpg_quartile - [q50]4.mpg_quartile = 0 ( 5) [q75]2.mpg_quartile - [q75]3.mpg_quartile = 0 ( 6) [q75]2.mpg_quartile - [q75]4.mpg_quartile = 0 F( 6, 70) = 1.10 Prob > F = 0.3740 The ECDF looks like this: Though there seem to be large differences between group 1 and groups 2-4 for the 3 quantiles in the graph. However, this is not a lot of data, so the failure to reject with the formal test is perhaps not that surprising because of the "micronumerosity". 
Interestingly, the Kruskal-Wallis test of the hypothesis that 4 groups are from the same population rejects: . kwallis price , by(mpg_quartile) Kruskal-Wallis equality-of-populations rank test +---------------------------+ | mpg_qu~e | Obs | Rank Sum | |----------+-----+----------| | 1 | 27 | 1397.00 | | 2 | 11 | 286.00 | | 3 | 22 | 798.00 | | 4 | 14 | 294.00 | +---------------------------+ chi-squared = 23.297 with 3 d.f. probability = 0.0001 chi-squared with ties = 23.297 with 3 d.f. probability = 0.0001
Confidence Interval of CDF You can do something like this with simultaneous-quantile regression with a set dummies corresponding to the 4 groups. This allows you to test and construct confidence intervals comparing coefficients
34,648
Confidence Interval of CDF
Assuming that your curves represent the empirical CDFs obtained from data, the usual way to test for a difference between more than two groups would be some kind of multi-sample non-parametric test akin to the Kolmogorov-Smirnov test, or a rank-based ANOVA test like the multi-sample Kruskal-Wallis test. There are a number of papers in the statistical literature looking at multi-sample non-parametric tests of this kind (see e.g., Kiefer 1959, Birnbaum and Hall 1960, Conover 1965, Sen 1973 for early literature). If you reduce down to a pairwise comparison of interest, you can of course use the traditional two-sample tests. There is an R package called kSamples that implements the multi-sample Kruskal-Wallis test and some other multi-sample non-parametric tests. I am not aware of a package that does the multi-sample KS test, but others may be able to point you to additional resources.
Confidence Interval of CDF
Assuming that your curves represent the empirical CDFs obtained from data, the usual way to test for a difference between more than two groups would be some kind of multi-sample non-parametric test ak
Confidence Interval of CDF Assuming that your curves represent the empirical CDFs obtained from data, the usual way to test for a difference between more than two groups would be some kind of multi-sample non-parametric test akin to the Kolmogorov-Smirnov test, or a rank-based ANOVA test like the multi-sample Kruskal-Wallis test. There are a number of papers in the statistical literature looking at multi-sample non-parametric tests of this kind (see e.g., Kiefer 1959, Birnbaum and Hall 1960, Conover 1965, Sen 1973 for early literature). If you reduce down to a pairwise comparison of interest, you can of course use the traditional two-sample tests. There is an R package called kSamples that implements the multi-sample Kruskal-Wallis test and some other multi-sample non-parametric tests. I am not aware of a package that does the multi-sample KS test, but others may be able to point you to additional resources.
Confidence Interval of CDF Assuming that your curves represent the empirical CDFs obtained from data, the usual way to test for a difference between more than two groups would be some kind of multi-sample non-parametric test ak
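A small base-R sketch of the kind of tests described above, on made-up data; kruskal.test() and ks.test() ship with base R, so no extra packages are assumed.

# Three artificial groups that differ in location.
set.seed(42)
g1 <- rnorm(50, mean = 0)
g2 <- rnorm(50, mean = 0.3)
g3 <- rnorm(50, mean = 0.8)
# Rank-based k-sample test of the null that all groups come from the same population.
kruskal.test(list(g1, g2, g3))
# A classical two-sample Kolmogorov-Smirnov test for one pairwise comparison of interest.
ks.test(g1, g3)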
34,649
Confidence Interval of CDF
For comparing 2 distributions at a time ("pairwise"), it's possible to find all the ranges of values for which the CDFs are statistically significantly different, while controlling the familywise error rate (FWER) at your desired level. This (new) approach is described in detail in this 2018 Journal of Econometrics paper, as well as in this 2019 Stata Journal article. R and Stata code (and open drafts of articles, and replication files) are at https://faculty.missouri.edu/~kaplandm. Both articles include examples with real data. Everything is fully nonparametric, and the "strong control" of FWER is exact even in small samples.
Confidence Interval of CDF
For comparing 2 distributions at a time ("pairwise"), it's possible to find all the ranges of values for which the CDFs are statistically significantly different, while controlling the familywise erro
Confidence Interval of CDF For comparing 2 distributions at a time ("pairwise"), it's possible to find all the ranges of values for which the CDFs are statistically significantly different, while controlling the familywise error rate (FWER) at your desired level. This (new) approach is described in detail in this 2018 Journal of Econometrics paper, as well as in this 2019 Stata Journal article. R and Stata code (and open drafts of articles, and replication files) are at https://faculty.missouri.edu/~kaplandm. Both articles include examples with real data. Everything is fully nonparametric, and the "strong control" of FWER is exact even in small samples.
Confidence Interval of CDF For comparing 2 distributions at a time ("pairwise"), it's possible to find all the ranges of values for which the CDFs are statistically significantly different, while controlling the familywise erro
34,650
Shall we use log(diff(x)) or diff(log(x))?
Let's have a look at both options. diff(log(x)) diff(log(x)) calculates relative changes. This also takes care of exponential trends. For example, you would use this to detrend the stock price development of Google. According to the laws of logarithms: $$\log(a) - \log(b) = \log(a/b)$$ all.equal(log(3) - log(5), log(3/5)) This means that, instead of using the absolute difference for detrending, you are using the relative change. As a bonus, differences calculated using the natural logarithm can also be interpreted as a percentage change. For more information I recommend: Cole, T. J., & Altman, D. G. (2017). Statistics Notes: Percentage differences, symmetry, and natural logarithms. BMJ, 358(August), j3683. https://doi.org/10.1136/bmj.j3683 log(diff(x)) On the other hand, log(diff(x)) calculates the absolute differences before the logarithm is applied. If you calculate a trend using this method, the trend would be more outlier resistant (but this also applies to diff(log(x))). This is helpful if there are a small number of big jumps in the time-series. Beware that this method would potentially break your analyses when the difference is 0 or negative. (in R: log(0) = -Inf or log(-1) = NaN) In my opinion diff(log(x)) is the better default choice. While there probably is a use-case for log(diff(x)), it's quite hard to think of one.
Shall we use log(diff(x)) or diff(log(x))?
Let's have a look at both options. diff(log(x)) diff(log(x)) calculates relative changes. This also takes care of exponential trends. For example, you would use this to detrend the stock price devel
Shall we use log(diff(x)) or diff(log(x))? Let's have a look at both options. diff(log(x)) diff(log(x)) calculates relative changes. This also takes care of exponential trends. For example, you would use this to detrend the stock price development of Google. According to the laws of logarithms: $$\log(a) - \log(b) = \log(a/b)$$ all.equal(log(3) - log(5), log(3/5)) This means that, instead of using the absolute difference for detrending, you are using the relative change. As a bonus, differences calculated using the natural logarithm can also be interpreted as a percentage change. For more information I recommend: Cole, T. J., & Altman, D. G. (2017). Statistics Notes: Percentage differences, symmetry, and natural logarithms. BMJ, 358(August), j3683. https://doi.org/10.1136/bmj.j3683 log(diff(x)) On the other hand, log(diff(x)) calculates the absolute differences before the logarithm is applied. If you calculate a trend using this method, the trend would be more outlier resistant (but this also applies to diff(log(x))). This is helpful if there are a small number of big jumps in the time-series. Beware that this method would potentially break your analyses when the difference is 0 or negative. (in R: log(0) = -Inf or log(-1) = NaN) In my opinion diff(log(x)) is the better default choice. While there probably is a use-case for log(diff(x)), it's quite hard to think of one.
Shall we use log(diff(x)) or diff(log(x))? Let's have a look at both options. diff(log(x)) diff(log(x)) calculates relative changes. This also takes care of exponential trends. For example, you would use this to detrend the stock price devel
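A short illustrative sketch of the two transforms on a simulated series with an exponential trend (the series and its parameters are made up for illustration).

set.seed(123)
x <- exp(cumsum(rnorm(200, mean = 0.01, sd = 0.02)))  # strictly positive series with an exponential trend
rel_changes <- diff(log(x))   # relative (approximately percentage) changes; detrends the series
abs_changes <- diff(x)        # absolute first differences
summary(rel_changes)
# log(diff(x)) only works where the first differences are strictly positive:
sum(abs_changes <= 0)         # these observations would become -Inf or NaN under log(diff(x))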
34,651
Is the absolute value of a stationary series also stationary?
In one particular case this is somewhat true: If your time series is stationary with normally distributed error, then the absolute values of your original time series follow a stationary folded normal distribution. Since even weak stationarity means both the mean and variance are constant over time, the absolute values will also be stationary. For other distributions this means that the absolute values of the original time series are at least weakly stationary, as constant variance of the original values translates to a constant mean of the new values. However, if your original time series only has a constant mean, the variance may change over time, which will affect the mean of the absolute values. Hence, the absolute values will not be (weakly) stationary themselves. A more general answer would require some study of the moment generating function of the absolute value of a random variable. Perhaps someone with more mathematical background can answer that.
Is the absolute value of a stationary series also stationary?
In one particular case this is somewhat true: If your time series is stationary with normally distributed error, then the absolute values of your original time series follow a stationary folded normal
Is the absolute value of a stationary series also stationary? In one particular case this is somewhat true: If your time series is stationary with normally distributed error, then the absolute values of your original time series follow a stationary folded normal distribution. Since even weak stationarity means both the mean and variance are constant over time, the absolute values will also be stationary. For other distributions this means that the absolute values of the original time series are at least weakly stationary, as constant variance of the original values translates to a constant mean of the new values. However, if your original time series only has a constant mean, the variance may change over time, which will affect the mean of the absolute values. Hence, the absolute values will not be (weakly) stationary themselves. A more general answer would require some study of the moment generating function of the absolute value of a random variable. Perhaps someone with more mathematical background can answer that.
Is the absolute value of a stationary series also stationary? In one particular case this is somewhat true: If your time series is stationary with normally distributed error, then the absolute values of your original time series follow a stationary folded normal
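A quick numerical sketch of the point above: with the mean held at zero, changing the variance changes the mean of the absolute values, so $|X_t|$ need not have a constant mean (the two "regimes" below are artificial).

set.seed(7)
x_low  <- rnorm(1e5, mean = 0, sd = 1)   # one regime
x_high <- rnorm(1e5, mean = 0, sd = 3)   # a later regime with larger variance
c(mean_abs_low = mean(abs(x_low)), mean_abs_high = mean(abs(x_high)))
# Folded-normal means for comparison: sd * sqrt(2/pi)
c(1, 3) * sqrt(2 / pi)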
34,652
Is the absolute value of a stationary series also stationary?
Let $\{X_n\colon n \in \mathbb Z\}$ be a time series where $X_n$ is a discrete random variable taking on values $\cos(n), \sin(n), -\cos(n), -\sin(n)$ with equal probability $\frac 14$. It is easily verified that $E[X_n] = 0$ and \begin{align}E[X_mX_{m+n}] &= \frac 14\bigg[\cos(m)\cos(m+n)+\sin(m)\sin(m+n)\\ &\qquad + (-\cos(m))(-\cos(m+n))+(-\sin(m))(-\sin(m+n))\bigg]\\ &= \frac 12\bigg[\cos(m)\cos(m+n)+\sin(m)\sin(m+n)\bigg]\\ &= \frac 12\,\cos(n)\end{align} and so the process is weakly stationary. It is also obviously not strictly stationary since $X_0$ and $X_n$, $n\neq 0$ take on different values and so the distributions of $X_n$ and $X_m$ are different instead of being the same as is needed (along with many other requirements) for strict stationarity. For the weakly stationary process described above, the process $\{|X_n|\colon n \in \mathbb Z\}$ is not weakly stationary because $E[|X_n|] = \frac 12\left[\,|\cos(n)| + |\sin(n)|\,\right]$ is not a constant as is needed for weak stationarity (though it is true that the autocorrelation function $E[|X_m|\cdot|X_{m+n}|]$ is a function of $n$ alone). On the other hand, as noted by @bananach in a comment on the main question, if stationarity is interpreted as strict stationarity, then strict stationarity of $\{X_n\colon n \in \mathbb Z\}$ implies that $\{|X_n|\colon n \in \mathbb Z\}$ is also a strictly stationary process. Strictly stationary processes with finite variance are also weakly stationary processes, and thus for this subclass, it is true that weak stationarity of $\{X_n\colon n \in \mathbb Z\}$ implies weak stationarity of $\{|X_n|\colon n \in \mathbb Z\}$. But, as described in the first part of this answer, one cannot always conclude that weak stationarity of $\{X_n\colon n \in \mathbb Z\}$ implies weak stationarity of $\{|X_n|\colon n \in \mathbb Z\}$.
Is the absolute value of a stationary series also stationary?
Let $\{X_n\colon n \in \mathbb Z\}$ be a time series where $X_n$ is a discrete random variable taking on values $\cos(n), \sin(n), -\cos(n), -\sin(n)$ with equal probability $\frac 14$. It is easily v
Is the absolute value of a stationary series also stationary? Let $\{X_n\colon n \in \mathbb Z\}$ be a time series where $X_n$ is a discrete random variable taking on values $\cos(n), \sin(n), -\cos(n), -\sin(n)$ with equal probability $\frac 14$. It is easily verified that $E[X_n] = 0$ and \begin{align}E[X_mX_{m+n}] &= \frac 14\bigg[\cos(m)\cos(m+n)+\sin(m)\sin(m+n)\\ &\qquad + (-\cos(m))(-\cos(m+n))+(-\sin(m))(-\sin(m+n))\bigg]\\ &= \frac 12\bigg[\cos(m)\cos(m+n)+\sin(m)\sin(m+n)\bigg]\\ &= \frac 12\,\cos(n)\end{align} and so the process is weakly stationary. It is also obviously not strictly stationary since $X_0$ and $X_n$, $n\neq 0$ take on different values and so the distributions of $X_n$ and $X_m$ are different instead of being the same as is needed (along with many other requirements) for strict stationarity. For the weakly stationary process described above, the process $\{|X_n|\colon n \in \mathbb Z\}$ is not weakly stationary because $E[|X_n|] = \frac 12\left[\,|\cos(n)| + |\sin(n)|\,\right]$ is not a constant as is needed for weak stationarity (though it is true that the autocorrelation function $E[|X_m|\cdot|X_{m+n}|]$ is a function of $n$ alone). On the other hand, as noted by @bananach in a comment on the main question, if stationarity is interpreted as strict stationarity, then strict stationarity of $\{X_n\colon n \in \mathbb Z\}$ implies that $\{|X_n|\colon n \in \mathbb Z\}$ is also a strictly stationary process. Strictly stationary processes with finite variance are also weakly stationary processes, and thus for this subclass, it is true that weak stationarity of $\{X_n\colon n \in \mathbb Z\}$ implies weak stationarity of $\{|X_n|\colon n \in \mathbb Z\}$. But, as described in the first part of this answer, one cannot always conclude that weak stationarity of $\{X_n\colon n \in \mathbb Z\}$ implies weak stationarity of $\{|X_n|\colon n \in \mathbb Z\}$.
Is the absolute value of a stationary series also stationary? Let $\{X_n\colon n \in \mathbb Z\}$ be a time series where $X_n$ is a discrete random variable taking on values $\cos(n), \sin(n), -\cos(n), -\sin(n)$ with equal probability $\frac 14$. It is easily v
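A numerical check of the counterexample above: the marginal mean of $|X_n|$ equals $(|\cos(n)| + |\sin(n)|)/2$, which varies with $n$; in the simulation each realization picks one of the four branches with probability 1/4.

# Exact marginal means of |X_n| for a few n.
n_vals <- 0:6
round((abs(cos(n_vals)) + abs(sin(n_vals))) / 2, 3)
# Simulation: each realization draws one of the four branches once and keeps it for all n.
set.seed(1)
branch <- sample(1:4, size = 1e5, replace = TRUE)
x_at <- function(n) c(cos(n), sin(n), -cos(n), -sin(n))[branch]
c(n1 = mean(abs(x_at(1))), n2 = mean(abs(x_at(2))))  # clearly different means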
34,653
Is the absolute value of a stationary series also stationary?
The answer is no. This can be seen by considering a sequence of independent r.vs. $X_i$ with their marginal distribution taken in a parametric family depending on three parameters. To get a generic example, we can consider a distribution which can be re-parameterized by using the first two moments along with the absolute moment $\mathbb{E}[|X|]$. We can then keep the first two parameters constant while the third $\mathbb{E}[|X_i|]$ depends on $i$. As a specific example we can take a discrete distribution with support $\{-2, \, -1, \, 1, \, 2\}$; the three moments $\mathbb{E}[X]$, $\mathbb{E}[X^2]$ and $\mathbb{E}[|X|]$ express as linear combinations of the four probabilities $p_k := \text{Pr}\{X = k\}$. Since the three linear combinations are linearly independent, we can use the three moments to re-parameterize as wanted.
Is the absolute value of a stationary series also stationary?
The answer is no. This can be seen by considering a sequence of independent r.vs. $X_i$ with their marginal distribution taken in a parametric family depending on three parameters. To get a generic
Is the absolute value of a stationary series also stationary? The answer is no. This can be seen by considering a sequence of independent r.vs. $X_i$ with their marginal distribution taken in a parametric family depending on three parameters. To get a generic example, we can consider a distribution which can be re-parameterized by using the first two moments along with the absolute moment $\mathbb{E}[|X|]$. We can then keep the first two parameters constant while the third $\mathbb{E}[|X_i|]$ depends on $i$. As a specific example we can take a discrete distribution with support $\{-2, \, -1, \, 1, \, 2\}$; the three moments $\mathbb{E}[X]$, $\mathbb{E}[X^2]$ and $\mathbb{E}[|X|]$ express as linear combinations of the four probabilities $p_k := \text{Pr}\{X = k\}$. Since the three linear combinations are linearly independent, we can use the three moments to re-parameterize as wanted.
Is the absolute value of a stationary series also stationary? The answer is no. This can be seen by considering a sequence of independent r.vs. $X_i$ with their marginal distribution taken in a parametric family depending on three parameters. To get a generic
34,654
Is the absolute value of a stationary series also stationary?
As several others have shown, weak stationarity does not necessarily remain when you take the absolute value of the time-series. The reason for this is that taking the absolute value of each element of the time-series can change the mean and variance in a non-uniform way, due to differences in the underlying distributions of the values. Although weak stationarity does not transfer over in this way, it is worth noting that strong stationarity does remain under the absolute-value transformation.
Is the absolute value of a stationary series also stationary?
As several others have shown, weak stationarity does not necessarily remain when you take the absolute value of the time-series. The reason for this is that taking the absolute value of each element
Is the absolute value of a stationary series also stationary? As several others have shown, weak stationarity does not necessarily remain when you take the absolute value of the time-series. The reason for this is that taking the absolute value of each element of the time-series can change the mean and variance in a non-uniform way, due to differences in the underlying distributions of the values. Although weak stationarity does not transfer over in this way, it is worth noting that strong stationarity does remain under the absolute-value transformation.
Is the absolute value of a stationary series also stationary? As several others have shown, weak stationarity does not necessarily remain when you take the absolute value of the time-series. The reason for this is that taking the absolute value of each element
34,655
Appropriate way to get Cross Validated AUC
As Fawcett explains in 'An Introduction to ROC Analysis', ROC averaging can be simply done by combining the scores from multiple sets $T_1, ..., T_k$ as you suggested in method (2). This is preferred to method (1) since it can be quite hard to average actual ROC curves, because the specificity (x-axis) values of the points are expected to be different. Therefore, you would need to do a lot of interpolation to average the curves. Another advantage is that the resulting curve from method (2) is smoother and approximates the AUC better, as a low number of scores tends to underestimate the AUROC (at least when calculated via the trapezoidal rule). However, one should note that an advantage of method (1) is that it enables you to estimate the variance of the AUC.
Appropriate way to get Cross Validated AUC
As Fawcett explains in 'An Introduction to ROC Analysis', ROC averaging can be simply done by combining the scores from multiple sets $T_1, ..., T_k$ as you suggested in method (2). This is preferred
Appropriate way to get Cross Validated AUC As Fawcett explains in 'An Introduction to ROC Analysis', ROC averaging can be simply done by combining the scores from multiple sets $T_1, ..., T_k$ as you suggested in method (2). This is preferred to method (1) since it can be quite hard to average actual ROC curves, because the specificity (x-axis) values of the points are expected to be different. Therefore, you would need to do a lot of interpolation to average the curves. Another advantage is that the resulting curve from method (2) is smoother and approximates the AUC better, as a low number of scores tends to underestimate the AUROC (at least when calculated via the trapezoidal rule). However, one should note that an advantage of method (1) is that it enables you to estimate the variance of the AUC.
Appropriate way to get Cross Validated AUC As Fawcett explains in 'An Introduction to ROC Analysis', ROC averaging can be simply done by combining the scores from multiple sets $T_1, ..., T_k$ as you suggested in method (2). This is preferred
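A base-R sketch contrasting the two summaries discussed in this thread — averaging per-fold AUCs versus pooling all scores into one AUC — on simulated folds; the AUC is computed with the rank (Mann-Whitney) formula so no packages are assumed, and the fold sizes and score model are made up.

auc <- function(score, label) {
  # Rank (Mann-Whitney) formula for the area under the ROC curve.
  r <- rank(score)
  n_pos <- sum(label == 1); n_neg <- sum(label == 0)
  (sum(r[label == 1]) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
}
set.seed(3)
folds <- lapply(1:5, function(k) {
  label <- rbinom(100, 1, 0.4)
  score <- label + rnorm(100)      # a noisy toy "classifier score"
  data.frame(score, label)
})
per_fold <- sapply(folds, function(d) auc(d$score, d$label))
pooled <- do.call(rbind, folds)
c(mean_of_fold_aucs = mean(per_fold),
  pooled_auc = auc(pooled$score, pooled$label),
  sd_across_folds = sd(per_fold))   # averaging (method 1) also yields a spread estimate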
34,656
Appropriate way to get Cross Validated AUC
The preferred method to do this is (1), rather than (2). For example, based on Forman, G. & Scholz, M. Apples-to-Apples in Cross-Validation Studies: Pitfalls in Classifier Performance Measurement (2009): The problem with AUC merge is that by sorting different folds together, it assumes that the classifier should produce well-calibrated probability estimates. Usually a researcher interested in measuring the quality of the probability estimates will use Brier score or such. By contrast, researchers who measure performance based on AUC typically are unconcerned with calibration or specific threshold values, being only concerned with the classifier’s ability to rank positives ahead of negatives. So, AUC merge adds a usually unintended requirement on the study: it will downgrade classifiers that rank well if they have poor calibration across folds, as we illustrate in Section 3.2
Appropriate way to get Cross Validated AUC
The preferred method to do this is (1), rather than (2). For example, based on Forman, G. & Scholz, M. Apples-to-Apples in Cross-Validation Studies: Pitfalls in Classifier Performance Meas
Appropriate way to get Cross Validated AUC The preferred method to do this is (1), rather than (2). For example, based on Forman, G. & Scholz, M. Apples-to-Apples in Cross-Validation Studies: Pitfalls in Classifier Performance Measurement (2009): The problem with AUC merge is that by sorting different folds together, it assumes that the classifier should produce well-calibrated probability estimates. Usually a researcher interested in measuring the quality of the probability estimates will use Brier score or such. By contrast, researchers who measure performance based on AUC typically are unconcerned with calibration or specific threshold values, being only concerned with the classifier’s ability to rank positives ahead of negatives. So, AUC merge adds a usually unintended requirement on the study: it will downgrade classifiers that rank well if they have poor calibration across folds, as we illustrate in Section 3.2
Appropriate way to get Cross Validated AUC The preferred method to do this is (1), rather than (2). For example, based on Forman, G. & Scholz, M. Apples-to-Apples in Cross-Validation Studies: Pitfalls in Classifier Performance Meas
34,657
Correcting for multiple pairwise comparisons with GAM objects {mgcv} in R
The glht() function for generalized linear hypotheses from the multcomp package can be used to carry out various kinds of contrasts using a range of different p-value adjustments. The contrasts you are looking for are also called "Tukey" contrasts for all pairwise comparisons. The p-value adjustments include single-step, Shaffer, Westfall, and all p.adjust methods, see ?summary.glht. As @GavinSimpson pointed out in the comments: For gam() objects from mgcv this does not work out of the box but requires some manual intervention. For lmer() from lme4 everything works conveniently. I illustrate below how both packages can be used with multcomp to obtain equivalent results. For illustration I use the sleepstudy data from lme4 but collapse the numeric regressor Days to a three-level factor (merely for illustration purposes): library("lme4") data("sleepstudy", package = "lme4") sleepstudy$Days <- cut(sleepstudy$Days, breaks = c(-Inf, 2.5, 5.5, Inf), labels = c("low", "med", "high")) m1 <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy) summary(m1) ## ... ## Fixed effects: ## Estimate Std. Error t value ## (Intercept) 262.170 9.802 26.747 ## Daysmed 31.217 6.365 4.905 ## Dayshigh 67.433 5.954 11.326 ## ... Then glht() can be used to set up all pairwise (aka Tukey) contrasts for the Days factor. The summary() method then applies the p-value adjustment (single-step, by default). library("multcomp") g1 <- glht(m1, linfct = mcp(Days = "Tukey")) summary(g1) ## Simultaneous Tests for General Linear Hypotheses ## ## Multiple Comparisons of Means: Tukey Contrasts ## ## Fit: lmer(formula = Reaction ~ Days + (1 | Subject), data = sleepstudy) ## ## Linear Hypotheses: ## Estimate Std. Error z value Pr(>|z|) ## med - low == 0 31.217 6.365 4.905 2.28e-06 *** ## high - low == 0 67.433 5.954 11.326 < 1e-06 *** ## high - med == 0 36.216 5.954 6.083 < 1e-06 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## (Adjusted p values reported -- single-step method) The same model can be fitted with gam() as described in the question. library("mgcv") m2 <- gam(Reaction ~ Days + s(Subject, bs = "re"), data = sleepstudy) summary(m2) ## ... ## Parametric coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 262.170 9.802 26.747 < 2e-16 *** ## Daysmed 31.217 6.365 4.905 2.27e-06 *** ## Dayshigh 67.433 5.954 11.326 < 2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ... However, the mcp(Days = "Tukey") method for describing the Tukey contrasts does not cooperate with gam() output and hence fails: g2 <- glht(m2, linfct = mcp(Days = "Tukey")) ## Error in linfct[[nm]] %*% C : ## requires numeric/complex matrix/vector arguments However, it is not difficult (albeit a bit technical and tedious) to set up the contrast matrix by hand. contr <- matrix(0, nrow = 3, ncol = length(coef(m2))) colnames(contr) <- names(coef(m2)) rownames(contr) <- c("med - low", "high - low", "high - med") contr[, 2:3] <- rbind(c(1, 0), c(0, 1), c(-1, 1)) The first columns of the contrast matrix show what is needed here: As the low coefficient is constrained to zero in the model, med - low is simply med and analogously for high - low. 
The last row then shows the contrast for high - med: contr[, 1:5] ## (Intercept) Daysmed Dayshigh s(Subject).1 s(Subject).2 ## med - low 0 1 0 0 0 ## high - low 0 0 1 0 0 ## high - med 0 -1 1 0 0 And with this contrast matrix we can conduct the pairwise comparison with glht(): g2 <- glht(m2, linfct = contr) summary(g2) ## Simultaneous Tests for General Linear Hypotheses ## ## Fit: gam(formula = Reaction ~ Days + s(Subject, bs = "re"), data = sleepstudy) ## ## Linear Hypotheses: ## Estimate Std. Error z value Pr(>|z|) ## med - low == 0 31.217 6.365 4.905 2.35e-06 *** ## high - low == 0 67.433 5.954 11.326 < 1e-06 *** ## high - med == 0 36.216 5.954 6.083 < 1e-06 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## (Adjusted p values reported -- single-step method) Another quite convenient way to indicate the contrasts to be tested is through character strings. This can set up linear functions based on the effect names from names(coef(m2)). And for factors with fewer levels (and hence fewer Tukey contrasts) this works quite nicely - but if the comparisons become more complex it's possibly easier to constract the contrast matrix as above. g3 <- glht(m2, linfct = c("Daysmed = 0", "Dayshigh = 0", "Dayshigh - Daysmed = 0")) summary(g3) ## Simultaneous Tests for General Linear Hypotheses ## ## Fit: gam(formula = Reaction ~ Days + s(Subject, bs = "re"), data = sleepstudy) ## ## Linear Hypotheses: ## Estimate Std. Error z value Pr(>|z|) ## Daysmed == 0 31.217 6.365 4.905 2.53e-06 *** ## Dayshigh == 0 67.433 5.954 11.326 < 1e-06 *** ## Dayshigh - Daysmed == 0 36.216 5.954 6.083 < 1e-06 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## (Adjusted p values reported -- single-step method)
Correcting for multiple pairwise comparisons with GAM objects {mgcv} in R
The glht() function for generalized linear hypotheses from the multcomp package can be used to carry out various kinds of contrasts using a range of different p-value adjustments. The contrasts you ar
Correcting for multiple pairwise comparisons with GAM objects {mgcv} in R The glht() function for generalized linear hypotheses from the multcomp package can be used to carry out various kinds of contrasts using a range of different p-value adjustments. The contrasts you are looking for are also called "Tukey" contrasts for all pairwise comparisons. The p-value adjustments include single-step, Shaffer, Westfall, and all p.adjust methods, see ?summary.glht. As @GavinSimpson pointed out in the comments: For gam() objects from mgcv this does not work out of the box but requires some manual intervention. For lmer() from lme4 everything works conveniently. I illustrate below how both packages can be used with multcomp to obtain equivalent results. For illustration I use the sleepstudy data from lme4 but collapse the numeric regressor Days to a three-level factor (merely for illustration purposes): library("lme4") data("sleepstudy", package = "lme4") sleepstudy$Days <- cut(sleepstudy$Days, breaks = c(-Inf, 2.5, 5.5, Inf), labels = c("low", "med", "high")) m1 <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy) summary(m1) ## ... ## Fixed effects: ## Estimate Std. Error t value ## (Intercept) 262.170 9.802 26.747 ## Daysmed 31.217 6.365 4.905 ## Dayshigh 67.433 5.954 11.326 ## ... Then glht() can be used to set up all pairwise (aka Tukey) contrasts for the Days factor. The summary() method then applies the p-value adjustment (single-step, by default). library("multcomp") g1 <- glht(m1, linfct = mcp(Days = "Tukey")) summary(g1) ## Simultaneous Tests for General Linear Hypotheses ## ## Multiple Comparisons of Means: Tukey Contrasts ## ## Fit: lmer(formula = Reaction ~ Days + (1 | Subject), data = sleepstudy) ## ## Linear Hypotheses: ## Estimate Std. Error z value Pr(>|z|) ## med - low == 0 31.217 6.365 4.905 2.28e-06 *** ## high - low == 0 67.433 5.954 11.326 < 1e-06 *** ## high - med == 0 36.216 5.954 6.083 < 1e-06 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## (Adjusted p values reported -- single-step method) The same model can be fitted with gam() as described in the question. library("mgcv") m2 <- gam(Reaction ~ Days + s(Subject, bs = "re"), data = sleepstudy) summary(m2) ## ... ## Parametric coefficients: ## Estimate Std. Error t value Pr(>|t|) ## (Intercept) 262.170 9.802 26.747 < 2e-16 *** ## Daysmed 31.217 6.365 4.905 2.27e-06 *** ## Dayshigh 67.433 5.954 11.326 < 2e-16 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## ... However, the mcp(Days = "Tukey") method for describing the Tukey contrasts does not cooperate with gam() output and hence fails: g2 <- glht(m2, linfct = mcp(Days = "Tukey")) ## Error in linfct[[nm]] %*% C : ## requires numeric/complex matrix/vector arguments However, it is not difficult (albeit a bit technical and tedious) to set up the contrast matrix by hand. contr <- matrix(0, nrow = 3, ncol = length(coef(m2))) colnames(contr) <- names(coef(m2)) rownames(contr) <- c("med - low", "high - low", "high - med") contr[, 2:3] <- rbind(c(1, 0), c(0, 1), c(-1, 1)) The first columns of the contrast matrix show what is needed here: As the low coefficient is constrained to zero in the model, med - low is simply med and analogously for high - low. 
The last row then shows the contrast for high - med: contr[, 1:5] ## (Intercept) Daysmed Dayshigh s(Subject).1 s(Subject).2 ## med - low 0 1 0 0 0 ## high - low 0 0 1 0 0 ## high - med 0 -1 1 0 0 And with this contrast matrix we can conduct the pairwise comparison with glht(): g2 <- glht(m2, linfct = contr) summary(g2) ## Simultaneous Tests for General Linear Hypotheses ## ## Fit: gam(formula = Reaction ~ Days + s(Subject, bs = "re"), data = sleepstudy) ## ## Linear Hypotheses: ## Estimate Std. Error z value Pr(>|z|) ## med - low == 0 31.217 6.365 4.905 2.35e-06 *** ## high - low == 0 67.433 5.954 11.326 < 1e-06 *** ## high - med == 0 36.216 5.954 6.083 < 1e-06 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## (Adjusted p values reported -- single-step method) Another quite convenient way to indicate the contrasts to be tested is through character strings. This can set up linear functions based on the effect names from names(coef(m2)). And for factors with fewer levels (and hence fewer Tukey contrasts) this works quite nicely - but if the comparisons become more complex it's possibly easier to constract the contrast matrix as above. g3 <- glht(m2, linfct = c("Daysmed = 0", "Dayshigh = 0", "Dayshigh - Daysmed = 0")) summary(g3) ## Simultaneous Tests for General Linear Hypotheses ## ## Fit: gam(formula = Reaction ~ Days + s(Subject, bs = "re"), data = sleepstudy) ## ## Linear Hypotheses: ## Estimate Std. Error z value Pr(>|z|) ## Daysmed == 0 31.217 6.365 4.905 2.53e-06 *** ## Dayshigh == 0 67.433 5.954 11.326 < 1e-06 *** ## Dayshigh - Daysmed == 0 36.216 5.954 6.083 < 1e-06 *** ## --- ## Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1 ## (Adjusted p values reported -- single-step method)
Correcting for multiple pairwise comparisons with GAM objects {mgcv} in R The glht() function for generalized linear hypotheses from the multcomp package can be used to carry out various kinds of contrasts using a range of different p-value adjustments. The contrasts you ar
34,658
In CNN, do we have learn kernel values at every convolution layer?
The answer by @Shehryar Malik is correct (+1), but it sounds a bit confusing, especially for people new to convolutional neural networks. In the usual CNN scenario, each layer has its own set of convolution kernels that has to be learned. This can be easily seen in the following (famous) image: The left block shows learned kernels in the first layer. The central and right blocks show kernels learned in deeper layers1. This is a very important feature of convolutional neural networks: at different layers the network learns to detect stuff at different levels of abstraction. Therefore the kernels are different. In theory, nothing prevents you from using the same kernels at each layer. In fact, that thing is called a recurrent convolutional neural network. 1 More precisely, they show what kind of image features these kernels respond to, since visualizing a kernel with shape 3$\times$3$\times$256 is not very easy/intuitive/useful.
In CNN, do we have learn kernel values at every convolution layer?
The answer by @Shehryar Malik is correct (+1), but it sounds a bit confusing, especially for people new to convolutional neural networks. In the usual CNN scenario, each layer has its own set of convo
In CNN, do we have learn kernel values at every convolution layer? The answer by @Shehryar Malik is correct (+1), but it sounds a bit confusing, especially for people new to convolutional neural networks. In the usual CNN scenario, each layer has its own set of convolution kernels that has to be learned. This can be easily seen in the following (famous) image: The left block shows learned kernels in the first layer. The central and right blocks show kernels learned in deeper layers1. This is a very important feature of convolutional neural networks: at different layers the network learns to detect stuff at different levels of abstraction. Therefore the kernels are different. In theory, nothing prevents you from using the same kernels at each layer. In fact, that thing is called a recurrent convolutional neural network. 1 More precisely, they show what kind of image features these kernels respond to, since visualizing a kernel with shape 3$\times$3$\times$256 is not very easy/intuitive/useful.
In CNN, do we have learn kernel values at every convolution layer? The answer by @Shehryar Malik is correct (+1), but it sounds a bit confusing, especially for people new to convolutional neural networks. In the usual CNN scenario, each layer has its own set of convo
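A back-of-the-envelope sketch of what "each layer has its own kernels" means for the number of learnable weights; the layer shapes below are made up purely for illustration.

# Learnable parameters of a conv layer: kernel_h * kernel_w * in_channels * out_channels (+ biases).
conv_params <- function(kh, kw, in_ch, out_ch) kh * kw * in_ch * out_ch + out_ch
layer1 <- conv_params(3, 3, 3, 64)     # e.g. RGB input -> 64 feature maps
layer2 <- conv_params(3, 3, 64, 128)   # 64 maps        -> 128 feature maps
layer3 <- conv_params(3, 3, 128, 256)  # 128 maps       -> 256 feature maps
c(layer1 = layer1, layer2 = layer2, layer3 = layer3, total = layer1 + layer2 + layer3)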
34,659
In CNN, do we have learn kernel values at every convolution layer?
That is entirely up to you. You can define only one set of kernel values and use it for all your layers or instead you could define a separate set of kernel values for each layer. Of course, it would be more prudent to define different sets of kernel values for each layer. This is because the kernel's job is to extract specific information from an input image. Different sets of kernel values at each layer will allow the network greater flexibility in deciding the best features to extract at each layer.
In CNN, do we have learn kernel values at every convolution layer?
That is entirely up to you. You can define only one set of kernel values and use it for all your layers or instead you could define a separate set of kernel values for each layer. Of course, it would
In CNN, do we have learn kernel values at every convolution layer? That is entirely up to you. You can define only one set of kernel values and use it for all your layers or instead you could define a separate set of kernel values for each layer. Of course, it would be more prudent to define different sets of kernel values for each layer. This is because the kernel's job is to extract specific information from an input image. Different sets of kernel values at each layer will allow the network greater flexibility in deciding the best features to extract at each layer.
In CNN, do we have learn kernel values at every convolution layer? That is entirely up to you. You can define only one set of kernel values and use it for all your layers or instead you could define a separate set of kernel values for each layer. Of course, it would
34,660
KL divergence between which distributions could be infinity
What happens to $D_{KL}(p \parallel q)$ when $p(x)$ and/or $q(x)$ is zero? In a strict sense, the log of zero is undefined because there's no value of $x$ such that $e^x = 0$. But, the definition of KL divergence uses the following conventions (see Cover and Thomas, Elements of Information Theory): $$0 \log \frac{0}{0} = 0, \quad 0 \log \frac{0}{q(x)} = 0, \quad p(x) \log \frac{p(x)}{0} = \infty$$ This implies that KL divergence is infinite if there exists an $x$ where $p(x) > 0$ and $q(x) = 0$. Conversely, $p(x)$ being zero doesn't produce infinity, whether or not $q(x)$ is also zero. Another way of saying this is that KL divergence is finite only if the support of $p$ is contained within the support of $q$. However, note that KL divergence can be infinite even if $p(x)$ and $q(x)$ are nonzero for all $x$ (see here for an example). Regarding your example, this means that KL divergence will be infinite if $p$ is Gaussian and $q$ is uniform, but not the other way around.
KL divergence between which distributions could be infinity
What happens to $D_{KL}(p \parallel q)$ when $p(x)$ and/or $q(x)$ is zero? In a strict sense, the log of zero is undefined because there's no value of $x$ such that $e^x = 0$. But, the definition of K
KL divergence between which distributions could be infinity What happens to $D_{KL}(p \parallel q)$ when $p(x)$ and/or $q(x)$ is zero? In a strict sense, the log of zero is undefined because there's no value of $x$ such that $e^x = 0$. But, the definition of KL divergence uses the following conventions (see Cover and Thomas, Elements of Information Theory): $$0 \log \frac{0}{0} = 0, \quad 0 \log \frac{0}{q(x)} = 0, \quad p(x) \log \frac{p(x)}{0} = \infty$$ This implies that KL divergence is infinite if there exists an $x$ where $p(x) > 0$ and $q(x) = 0$. Conversely, $p(x)$ being zero doesn't produce infinity, whether or not $q(x)$ is also zero. Another way of saying this is that KL divergence is finite only if the support of $p$ is contained within the support of $q$. However, note that KL divergence can be infinite even if $p(x)$ and $q(x)$ are nonzero for all $x$ (see here for an example). Regarding your example, this means that KL divergence will be infinite if $p$ is Gaussian and $q$ is uniform, but not the other way around.
KL divergence between which distributions could be infinity What happens to $D_{KL}(p \parallel q)$ when $p(x)$ and/or $q(x)$ is zero? In a strict sense, the log of zero is undefined because there's no value of $x$ such that $e^x = 0$. But, the definition of K
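A tiny discrete sketch of the conventions above, with two made-up three-outcome distributions: the divergence is finite when the support of $p$ sits inside the support of $q$, and infinite as soon as $p$ puts mass where $q$ has none.

kl_div <- function(p, q) {
  # 0 * log(0/q) is taken as 0; p * log(p/0) evaluates to Inf, matching the conventions above.
  sum(ifelse(p == 0, 0, p * log(p / q)))
}
p <- c(0.5, 0.5, 0.0)
q <- c(0.4, 0.3, 0.3)
kl_div(p, q)  # finite: supp(p) is contained in supp(q)
kl_div(q, p)  # Inf: q has mass on the third outcome where p is zero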
34,661
Difference between strided and non-strided convolution
Stride is the distance between spatial locations where the convolution kernel is applied. In the default scenario, the distance is 1 in each dimension. This is also the default value in Tensor Flow, as @Axel Vanraes mentions. I suppose this is sometimes referred to as non-strided convolution, although that is incorrect: the stride is one. When the stride is larger than one, one usually talks about strided convolution to make the difference explicit. To visualize the difference: Stride-1 convolution ("non-strided"): Stride-2 convolution ("strided"): Images from https://github.com/vdumoulin/conv_arithmetic
Difference between strided and non-strided convolution
Stride is the distance between spatial locations where the convolution kernel is applied. In the default scenario, the distance is 1 in each dimension. This is also the default value in Tensor Flow, a
Difference between strided and non-strided convolution Stride is the distance between spatial locations where the convolution kernel is applied. In the default scenario, the distance is 1 in each dimension. This is also the default value in Tensor Flow, as @Axel Vanraes mentions. I suppose this is sometimes referred to as non-strided convolution, although that is incorrect: the stride is one. When the stride is larger than one, one usually talks about strided convolution to make the difference explicit. To visualize the difference: Stride-1 convolution ("non-strided"): Stride-2 convolution ("strided"): Images from https://github.com/vdumoulin/conv_arithmetic
Difference between strided and non-strided convolution Stride is the distance between spatial locations where the convolution kernel is applied. In the default scenario, the distance is 1 in each dimension. This is also the default value in Tensor Flow, a
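A minimal 1-D sketch of the difference, written as plain R with no padding and (as in the deep-learning convention) no kernel flip; the input and kernel values are arbitrary.

strided_conv1d <- function(x, kernel, stride = 1) {
  k <- length(kernel)
  starts <- seq(1, length(x) - k + 1, by = stride)   # positions where the window is applied
  sapply(starts, function(s) sum(x[s:(s + k - 1)] * kernel))
}
x <- 1:10
kernel <- c(1, 0, -1)
strided_conv1d(x, kernel, stride = 1)  # 8 outputs: the window moves one position at a time
strided_conv1d(x, kernel, stride = 2)  # 4 outputs: the window skips every other position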
34,662
Difference between strided and non-strided convolution
There's always a stride. The whole idea of convolution is that you stride the window over the input vector, matrix, or tensor. The stride parameter tells you the length of the step in your stride. By default it's probably 1 in any framework. You can increase the stride (step) length in order to save space or cut calculation time. You'll be forgoing some information when doing so; it's a trade-off between resource consumption (whether it's CPU or memory) and information retrieved.
Difference between strided and non-strided convolution
There's always a stride. The whole idea of convolution is that you stride the window over the input vector, matrix or tensor otherwise. Stride parameter tells you the length of the step in your strid
Difference between strided and non-strided convolution There's always a stride. The whole idea of convolution is that you stride the window over the input vector, matrix, or tensor. The stride parameter tells you the length of the step in your stride. By default it's probably 1 in any framework. You can increase the stride (step) length in order to save space or cut calculation time. You'll be forgoing some information when doing so; it's a trade-off between resource consumption (whether it's CPU or memory) and information retrieved.
Difference between strided and non-strided convolution There's always a stride. The whole idea of convolution is that you stride the window over the input vector, matrix or tensor otherwise. Stride parameter tells you the length of the step in your strid
34,663
Difference between strided and non-strided convolution
Applying a convolution means sliding a kernel over an input signal, outputting a weighted sum where the weights are the values inside the kernel. The stride is the sliding step. You cannot have a stride of 0; this would mean not sliding at all. In the paper they use a convolution with a 2x2 stride, i.e. the step is 2 in both the x and y directions, followed by a non-strided convolution, stride 1, step 1. You can observe from fig. 2 of the paper that the output of a convolution with stride 2 halves the width and height of the input (first 3 conv layers), whereas the output of a convolution with stride 1 (last 2 conv layers) has width = input_width-2 and height = input_height-2 because the kernel is 3x3.
Difference between strided and non-strided convolution
Applying convolution means sliding a kernel over an input signal outputting a weighted sum where the weights are the values inside the kernel. The stride is the sliding step. You can not have a strid
Difference between strided and non-strided convolution Applying a convolution means sliding a kernel over an input signal, outputting a weighted sum where the weights are the values inside the kernel. The stride is the sliding step. You cannot have a stride of 0; this would mean not sliding at all. In the paper they use a convolution with a 2x2 stride, i.e. the step is 2 in both the x and y directions, followed by a non-strided convolution, stride 1, step 1. You can observe from fig. 2 of the paper that the output of a convolution with stride 2 halves the width and height of the input (first 3 conv layers), whereas the output of a convolution with stride 1 (last 2 conv layers) has width = input_width-2 and height = input_height-2 because the kernel is 3x3.
Difference between strided and non-strided convolution Applying convolution means sliding a kernel over an input signal outputting a weighted sum where the weights are the values inside the kernel. The stride is the sliding step. You can not have a strid
34,664
Difference between strided and non-strided convolution
First of all, there is no such thing as a non-strided convolution. Using the low-level API, the following statement will give an error because every element in the strides argument must be greater than zero! y = tf.nn.conv2d(x, strides=[1, 0, 0, 1]) # error ! y = tf.nn.conv2d(x, strides=[1, 1, 1, 1]) # OK Note that the first (=sample index) and the last (=channel) dimension must equal one. Secondly, when using the TF Layers API, the strides argument has a default value: tf.layers.conv2d( inputs, filters, kernel_size, strides=(1, 1), ... Note that there are only two entries in the strides tuple, which correspond to the second and the third entry of the low-level strides. The first and the last dimension are dropped. So if you haven't set the strides argument, the convolution filters will move with a step of one pixel by default, and I suppose this is what they mean by non-strided convolutions in the paper.
Difference between strided and non-strided convolution
First of all, there is no such thing as a non-strided convolution. Using the low-level API, the following statement will give an error because every element in strides argument must be greater than ze
Difference between strided and non-strided convolution First of all, there is no such thing as a non-strided convolution. Using the low-level API, the following statement will give an error because every element in the strides argument must be greater than zero! y = tf.nn.conv2d(x, strides=[1, 0, 0, 1]) # error ! y = tf.nn.conv2d(x, strides=[1, 1, 1, 1]) # OK Note that the first (=sample index) and the last (=channel) dimension must equal one. Secondly, when using the TF Layers API, the strides argument has a default value: tf.layers.conv2d( inputs, filters, kernel_size, strides=(1, 1), ... Note that there are only two entries in the strides tuple, which correspond to the second and the third entry of the low-level strides. The first and the last dimension are dropped. So if you haven't set the strides argument, the convolution filters will move with a step of one pixel by default, and I suppose this is what they mean by non-strided convolutions in the paper.
Difference between strided and non-strided convolution First of all, there is no such thing as a non-strided convolution. Using the low-level API, the following statement will give an error because every element in strides argument must be greater than ze
34,665
Difference between strided and non-strided convolution
These are absolutely standard names. There is a nice description of the standard meaning of these terms here: https://cs231n.github.io/convolutional-networks/ which is, to my knowledge, the best description. If the link ever disappears, just type CS231N into Google. p.s. Unfortunately, not all people append a glossary to scientific papers or to papers about practical modeling techniques (written not in the way a classical scientist works, but still useful). As you can see, the document that you have shared looks like a paper, but really it is too bad a paper for publishing: it does not contain a glossary of terms, an experimental section, theory, limitations of the solution, future work, a complete experimental setup... People in my academic field (Optimization) have another style of publishing: https://arxiv.org/abs/2102.07845
Difference between strided and non-strided convolution
These are absolutely standard names. There is a nice description of the standard meaning of these terms here: https://cs231n.github.io/convolutional-networks/ Which is to my knowledge is the best desc
Difference between strided and non-strided convolution These are absolutely standard names. There is a nice description of the standard meaning of these terms here: https://cs231n.github.io/convolutional-networks/ which is, to my knowledge, the best description. If the link ever disappears, just type CS231N into Google. p.s. Unfortunately, not all people append a glossary to scientific papers or to papers about practical modeling techniques (written not in the way a classical scientist works, but still useful). As you can see, the document that you have shared looks like a paper, but really it is too bad a paper for publishing: it does not contain a glossary of terms, an experimental section, theory, limitations of the solution, future work, a complete experimental setup... People in my academic field (Optimization) have another style of publishing: https://arxiv.org/abs/2102.07845
Difference between strided and non-strided convolution These are absolutely standard names. There is a nice description of the standard meaning of these terms here: https://cs231n.github.io/convolutional-networks/ Which is to my knowledge is the best desc
34,666
Simulating a bimodal distribution in the range of [1;5] in R
The easiest approach would be to draw $\frac{n}{2}$ samples from a truncated normal distribution with one mean and another $\frac{n}{2}$ samples from a truncated normal distribution with a different mean. This is a mixture, specifically one with equal weights; you could also use different weights by varying the proportions by which you draw from both distributions. library(truncnorm) nn <- 1e4 set.seed(1) sims <- c(rtruncnorm(nn/2, a=1, b=5, mean=2, sd=.5), rtruncnorm(nn/2, a=1, b=5, mean=4, sd=.5)) hist(sims)
Simulating a bimodal distribution in the range of [1;5] in R
The easiest approach would be to draw $\frac{n}{2}$ samples from a truncated normal distribution with one mean and another $\frac{n}{2}$ samples from a truncated normal distribution with a different m
Simulating a bimodal distribution in the range of [1;5] in R The easiest approach would be to draw $\frac{n}{2}$ samples from a truncated normal distribution with one mean and another $\frac{n}{2}$ samples from a truncated normal distribution with a different mean. This is a mixture, specifically one with equal weights; you could also use different weights by varying the proportions by which you draw from both distributions. library(truncnorm) nn <- 1e4 set.seed(1) sims <- c(rtruncnorm(nn/2, a=1, b=5, mean=2, sd=.5), rtruncnorm(nn/2, a=1, b=5, mean=4, sd=.5)) hist(sims)
Simulating a bimodal distribution in the range of [1;5] in R The easiest approach would be to draw $\frac{n}{2}$ samples from a truncated normal distribution with one mean and another $\frac{n}{2}$ samples from a truncated normal distribution with a different m
34,667
Simulating a bimodal distribution in the range of [1;5] in R
Another way is to use the beta distribution. It is bounded on $[0;1]$, so you just need to "move" half of the simulated sample to $[1;3]$ and the other half to $[3;5]$. Here I use Beta(2,2) and Stephan Kolassa's framework: nn <- 1e4 set.seed(1) betas<-rbeta(nn,2,2) sims <- c(betas[1:(nn/2)]*2+1, betas[(nn/2+1):nn]*2+3) hist(sims)
Simulating a bimodal distribution in the range of [1;5] in R
Another way is to use beta distribution. It is bounded on $[0;1]$. So you just need to "move" half of simulated sample to $[1;3]$ and another half to $[3;5]$. Here I use Beta(2,2) and Stephan Kolassa
Simulating a bimodal distribution in the range of [1;5] in R Another way is to use beta distribution. It is bounded on $[0;1]$. So you just need to "move" half of simulated sample to $[1;3]$ and another half to $[3;5]$. Here I use Beta(2,2) and Stephan Kolassa's framework: nn <- 1e4 set.seed(1) betas<-rbeta(nn,2,2) sims <- c(betas[1:(nn/2)]*2+1, betas[(nn/2+1):nn]*2+3) hist(sims)
Simulating a bimodal distribution in the range of [1;5] in R Another way is to use beta distribution. It is bounded on $[0;1]$. So you just need to "move" half of simulated sample to $[1;3]$ and another half to $[3;5]$. Here I use Beta(2,2) and Stephan Kolassa
34,668
Why is gradient descent so bad at optimizing polynomial regression?
Is this because the cost function is non convex ? Not smooth ? Due to numerical instability or collinearity ? This appears to be simple linear regression with a sum-of-squares loss function, which is convex and continuously differentiable (smooth); if $X^TX$ is invertible it is strictly convex and has a closed-form solution. (1, 2) Why is gradient descent, and to a certain extent the scipy.optimize algorithm, so bad at optimizing polynomial regression ? Gradient descent is known to be both slow (compared to second-derivative methods) and sensitive to step size. I also want to second what @Sycorax and @Jonny Lomond put in the comments - this particular problem is a difficult one for GD because of the massive magnitude difference across your dimensions, and your closed form solution may also be unstable. This link has some really fantastic material on optimization challenges and momentum-based solutions including a polynomial regression example. A few approaches you might consider: As @Jonny Lomond suggested, standardize each polynomial separately, or tune your step size. Plot your loss function over iterations to determine if there are any obvious problems with your optimization. If your gradient is "overshooting", you could try using an adaptive step size (reducing it as a function of the number of iterations). Use backtracking to dynamically determine a better step size at each iteration. Use a momentum-based gradient method like Nesterov accelerated gradient descent. These approaches are almost as fast (in terms of convergence) as second order methods in practice.
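As a hedged sketch of the first suggestion (standardizing each polynomial term), the following R example runs plain gradient descent on a standardized polynomial design matrix; the cubic data-generating process, learning rate and iteration count are invented for illustration.
set.seed(1)
n <- 200
x <- runif(n, 0, 2)
y <- 1 + 2 * x - 0.5 * x^3 + rnorm(n, sd = 0.1)
X <- cbind(1, scale(poly(x, degree = 3, raw = TRUE)))   # intercept plus standardized x, x^2, x^3
beta <- rep(0, ncol(X))
lr <- 0.1
for (i in 1:5000) {
  grad <- -2 * t(X) %*% (y - X %*% beta) / n            # gradient of the mean squared error
  beta <- beta - lr * grad
}
mean((y - X %*% beta)^2)                                # close to the noise variance (about 0.01)
mean(resid(lm(y ~ X - 1))^2)                            # essentially the same value from the closed-form fit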
Why is gradient descent so bad at optimizing polynomial regression?
Is this because the cost function is non convex ? Not smooth ? Due to numerical instability or collinearity ? This appears to be simple linear regression with a sum-of-squares loss function. If you
Why is gradient descent so bad at optimizing polynomial regression? Is this because the cost function is non convex ? Not smooth ? Due to numerical instability or collinearity ? This appears to be simple linear regression with a sum-of-squares loss function. If you are able to obtain a closed form solution (i.e. $X^TX$ is invertible) then that loss function is both convex and continuously differentiable (smooth). (1, 2) Why is gradient descent, and to a certain extent the scipy.optimize algorithm, so bad a optimizing polynomial regression ? Gradient descent is known to be both slow (compared to second-derivative methods) and sensitive to step size. I also want to second what @Sycorax and @Jonny Lomond put in the comments - this particular problem is a difficult one for GD because of the massive magnitude difference across your dimensions, and your closed form solution may also be unstable. This link has has some really fantastic material on optimization challenges and momentum-based solutions including a polynomial regression example. A few approaches you might consider: As @Jonny Lomond suggested, standardize each polynomial separately, or tune your step size. Plot your loss function over iterations to determine if there are any obvious problems with your optimization. If your gradient is "overshooting", you could try using an adaptive step size (reducing it as a function of the number of iterations). Use backtracking to dynamically determine a better step size at each iteration. Use a momentum-based gradient method like Nesterov accelerated gradient descent. These approaches are almost as fast (in terms of convergence) as second order methods in practice.
Why is gradient descent so bad at optimizing polynomial regression? Is this because the cost function is non convex ? Not smooth ? Due to numerical instability or collinearity ? This appears to be simple linear regression with a sum-of-squares loss function. If you
34,669
Why is gradient descent so bad at optimizing polynomial regression?
Thanks for your responses. After investigating the shape of the cost function and the behaviour of the gradient descent algorithm, here are my findings (which won't surprise anyone, but some self-learners might find this useful). 1) The cost function exhibits a very "flat" bottom Plotting the convergence of the cost function for various polynomial orders and step sizes shows that the gradient descent algorithm converges very rapidly at first, and then slows down significantly. Here is a plot for $X = [x, x^2]$ but the behaviour is the same for higher orders My intuition is that a gradient descent algorithm which automatically increases the step size when the cost function is flat would perform much better. 2) Since the cost function looks like a flat valley, the starting point matters a lot In fact, initializing at $[0,0]$ was not a particularly good idea because the value is very close to the bottom of the valley already. Hence the gradient descent struggles to reach the global minimum. Initializing at random values and comparing results would improve this. 3) Scipy.optimize algorithms are doing just fine The 'BFGS' algorithm is in fact very good at finding the global minimum. The issue was that the default tolerance value was too large and the algorithm terminated before reaching the global minimum. Setting the option: 'gtol': 1e-10 leads to convergence in a few hundred iterations (based on 1st order derivative only) 4) Numerical instability Indeed, as @Jonny Lomond hinted, very small $x$ values led to extreme numbers for high order polynomials, so I have truncated values close to zero. This improved the behaviour of the algorithms for polynomials of order 15 and higher. Code here
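For readers working in R rather than scipy, the analogous knobs sit in optim()'s control list; this is only an analogy, not the code from the linked notebook, and the data below are made up. Because the tighter run simply continues the same deterministic descent further, its objective value can only be as good or better.
set.seed(1)
x <- runif(200, 0, 2)
y <- 1 + 2 * x - 0.5 * x^3 + rnorm(200, sd = 0.1)
X <- cbind(1, x, x^2, x^3)
sse <- function(b) sum((y - X %*% b)^2)
loose <- optim(rep(0, 4), sse, method = "BFGS")                                         # default stopping tolerance
tight <- optim(rep(0, 4), sse, method = "BFGS", control = list(reltol = 1e-12, maxit = 1e4))
c(loose = loose$value, tight = tight$value, closed_form = sum(resid(lm(y ~ X - 1))^2))  # compare against the exact least-squares fit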
Why is gradient descent so bad at optimizing polynomial regression?
Thanks for your responses, after investigating the shape of the cost function and the behaviour of the gradient descent algorithm here are my findings (which won't surprise any one but some self-learn
Why is gradient descent so bad at optimizing polynomial regression? Thanks for your responses, after investigating the shape of the cost function and the behaviour of the gradient descent algorithm here are my findings (which won't surprise any one but some self-learners might find this useful) 1) The cost function exhibits a very "flat" bottom Plotting the convergence of the cost function for various polynomial orders and step sizes shows that the gradient descent algorithm converges very rapidly at first, and then slows down significantly. Here is a plot for $X = [x, x^2]$ but the behaviour is the same for higher orders My intuition is that a gradient descent algorithm which automatically increases the step size when the cost function is flat would perform much better. 2) Since the cost function looks like a flat valley, the starting point matters a lot In fact, initializing at $[0,0]$ was not a particularly good idea because the value is very close to the bottom of the valley already. Hence the gradient descent struggles to reach to global minimum. Initializing at random values and comparing results would improve this 3) Scipy.optimize algorithms are doing just fine The 'BFGS' algorithm is in fact very good at finding the global minimum. The issue was that the default tolerance value was too large and the algorithm terminated before reaching the global minimum. Setting the option: 'gtol': 1e-10 leads to convergence in a few hundred iterations (based on 1st order derivative only) 4) Numerical instability Indeed as @Jonny Lomond hinted, very small $x$ values led to extreme numbers for high order polynomials, so I have truncated values close to zero. This improved the behaviour of the algorithms for polynomials order 15 and more Code here
Why is gradient descent so bad at optimizing polynomial regression? Thanks for your responses, after investigating the shape of the cost function and the behaviour of the gradient descent algorithm here are my findings (which won't surprise any one but some self-learn
34,670
Where are most points in a uniformly distributed high-dimensional ball?
As pointed out by @Xi'an, the OP's question is actually about a uniform distribution on the $n$-dimensional ball of radius $r$, the set of points at distance no more than $r$ from the center of the ball, and not about a uniform distribution on the $n$-dimensional hypersphere which is the surface of the ball (the set of points at distance exactly $r$ from the center). Note that it is being assumed that the joint density of the $n$ random variables has constant value $V^{-1}$ where $V$ is the volume of the ball. This is not the same as assuming that the distance of the random point is uniformly distributed on $[0,r]$ (or $[0,r)$ for those who do not want to include the surface of the hypersphere). Almost the entire volume of an $n$-dimensional ball lies close to the surface. This is because $V$ is proportional to the $n$-th power of the radius of the ball, and $r^n$ is a very rapidly increasing function. Even in $3$-space, $\frac 78 = 1 - \left(\frac 12\right)^3$ of the volume lies closer to the surface than to the origin, and this fraction gets closer and closer to $1$ as $n$ increases. Turning the calculation around, for a fixed proportion $\alpha$, say $\alpha=0.95$, $100\alpha\%$ of the volume lies in a shell of inner radius $\sqrt[n]{1-\alpha}\,r$ and outer radius $r$, and so $1-\sqrt[n]{1-\alpha}$, the relative thickness of the shell, decreases towards $0$ with increasing $n$ for any choice of $\alpha \in (0,1)$.
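A quick numerical check of both claims (plain R, with $\alpha = 0.95$ as in the text and a few illustrative dimensions):
n <- c(2, 3, 10, 50, 100)
round(1 - (1/2)^n, 4)              # fraction of the volume closer to the surface than to the centre
round(1 - (1 - 0.95)^(1/n), 4)     # relative thickness of the shell containing 95% of the volume; shrinks towards 0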
Where are most points in a uniformly distributed high-dimensional ball?
As pointed out by @Xi'an, the OP's question is actually about a uniform distribution on the $n$-dimensional ball of radius $r$, the set of points at distance no more than $r$ from the center of the ba
Where are most points in a uniformly distributed high-dimensional ball? As pointed out by @Xi'an, the OP's question is actually about a uniform distribution on the $n$-dimensional ball of radius $r$, the set of points at distance no more than $r$ from the center of the ball, and not about a uniform distribution on the $n$-dimensional hypersphere which is the surface of the ball (the set of points at distance exactly $r$ from the center). Note that it is being assumed that the joint density of the $n$ random variables has constant value $V^{-1}$ where $V$ is the volume of the ball. This is not the same as assuming that the distance of the random point is uniformly distributed on $[0,r]$ (or $[0,r)$ for those who do not want to include the surface of the hypersphere). Almost the entire volume of a $n$-dimensional ball lies close to the surface. This is because $V$ is proportional to the $n$-th power of the radius of the ball, and $r^n$ is a very rapidly increasing function. Even in $3$-space, $\frac 78 = 1 - \left(\frac 12\right)^3$th of the volume lies closer to the surface than to the origin, and this fraction gets closer and closer to $1$ as $n$ increases. Turning the calculation around, for a fixed proportion $\alpha$, say $\alpha=0.95$, $100\alpha\%$ of the volume lies in a shell of inner radius $\sqrt[n]{\alpha}r$ and outer radius $r$ and so $1-\sqrt[n]{\alpha}$, the relative thickness of the shell, decreases towards $0$ with increasing $n$ for any choice of $\alpha \in (0,1)$.
Where are most points in a uniformly distributed high-dimensional ball? As pointed out by @Xi'an, the OP's question is actually about a uniform distribution on the $n$-dimensional ball of radius $r$, the set of points at distance no more than $r$ from the center of the ba
34,671
Why is the dickey fuller test different from a simple t-test
You are right that the test statistic is just a standard t-statistic. It, however, follows a different null distribution, i.e., using critical values from the t or normal distribution would lead to tests that would not reject in $\alpha$% of the cases when the null is true. See Estimation of unit-root AR(1) model with OLS for an assumption that is violated and How is the augmented Dickey–Fuller test (ADF) table of critical values calculated? for some information on the asymptotic null distribution. From the first link, we note that $$ T^{-1}\sum_{t=1}^Tx_{t-1}\epsilon_{t}\Rightarrow\sigma^2/2\{W(1)^2-1\}. $$ In particular, $W(1)^2-1$ is a demeaned $\chi^2_1$ random variable (as the Wiener process has $W(s)\sim N(0,s)$), which has probability 0.682 of being less than zero, leading to the skew in the distribution of the DF statistic.
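To see the skew concretely, one can simulate the Dickey-Fuller t-statistic under the null; this small R sketch (sample size and replication count are arbitrary choices for illustration) regresses the first difference on the lagged level for simulated random walks:
set.seed(1)
nrep <- 5000; n <- 200
tstat <- replicate(nrep, {
  y <- cumsum(rnorm(n))                        # random walk, so the unit-root null is true
  dy <- diff(y); ylag <- y[-n]
  coef(summary(lm(dy ~ ylag)))["ylag", "t value"]
})
quantile(tstat, c(0.01, 0.05))                 # roughly -3.4 and -2.9 (with-constant case), not the normal -2.33 and -1.64
mean(tstat < qnorm(0.05))                      # far above 0.05: normal critical values would over-reject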
Why is the dickey fuller test different from a simple t-test
You are right that the test statistic is just a standard t-statistic. It, however, follows a different null distribution, i.e., using critical values from the t or normal distribution would lead to t
Why is the dickey fuller test different from a simple t-test You are right that the test statistic is just a standard t-statistic. It, however, follows a different null distribution, i.e., using critical values from the t or normal distribution would lead to tests that would not reject in $\alpha$% of the cases when the null is true. See Estimation of unit-root AR(1) model with OLS for an assumption that is violated and How is the augmented Dickey–Fuller test (ADF) table of critical values calculated? for some information on the asymptotic null distribution. From the first link, we note that $$ T^{-1}\sum_{t=1}^Tx_{t-1}\epsilon_{t}\Rightarrow\sigma^2/2\{W(1)^2-1\}. $$ In particular, $W(1)^2-1$ is a demeaned $\chi^2_1$ random variable (as the Wiener process has $W(s)\sim N(0,s)$), which has probability 0.682 of being less than zero, leading to the skew in the distribution of the DF statistic.
Why is the dickey fuller test different from a simple t-test You are right that the test statistic is just a standard t-statistic. It, however, follows a different null distribution, i.e., using critical values from the t or normal distribution would lead to t
34,672
Stationarity: time series vs regression
Some notes: OLS is a fitting algorithm, just like ML. A regression model is somewhat hard to define, see this thread. A time series is a type of data. Now you can take a type of data, use a model for it and estimate the model with an estimation algorithm. Consider an autoregression; it is a regression model for time series data, and OLS is used to estimate it. This is quite close to what you are describing. How does stationarity fit in? It is relevant for the model and for its estimation. A model must account for possible nonstationarity as otherwise it might fail to adequately reflect the data generating process. Moreover, estimation algorithms often do not deliver quality results under nonstationarity, e.g. the estimates of the model parameters become inconsistent. But this is not always the case. If $y_t$ is nonstationary because it is integrated of order one, i.e. I(1), you can still run a regression of $y_t$ on $y_{t−1}$ and get a consistent estimate of the slope coefficient with OLS. This is again something that you have observed.
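A short illustration of the last remark (hypothetical data; the series is a pure random walk and hence I(1)):
set.seed(1)
n <- 1e4
y <- cumsum(rnorm(n))          # nonstationary, integrated of order one
coef(lm(y[-1] ~ y[-n]))        # the slope on the lagged value comes out very close to 1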
Stationarity: time series vs regression
Some notes: OLS is a fitting algorithm, just like ML. A regression model is somewhat hard to define, see this thread. A time series is a type of data. Now you can take a type of data, use a model for
Stationarity: time series vs regression Some notes: OLS is a fitting algorithm, just like ML. A regression model is somewhat hard to define, see this thread. A time series is a type of data. Now you can take a type of data, use a model for it and estimate the model with an estimation algorithm. Consider an autoregression; it is a regression model for times series data, and OLS is used to estimate it. This is quite close to what you are describing. How does stationarity fit in? It is relevant for the model and for its estimation. A model must account for possible nonstationarity as otherwise it might fail to adequately reflect the data generating process. Moreover, estimation algorithms often do not deliver quality results under nonstationarity, e.g. the estimates of the model parameters become inconsistent. But this is not always the case. If $y_t$ is nonstationary because it is integrated of order one, i.e. I(1), you can still run a regression of $y_t$ on $y_{t−1}$ and get a consistent estimate of the slope coefficient with OLS. This is again something that you have observed.
Stationarity: time series vs regression Some notes: OLS is a fitting algorithm, just like ML. A regression model is somewhat hard to define, see this thread. A time series is a type of data. Now you can take a type of data, use a model for
34,673
Stationarity: time series vs regression
In time series the data is ordered. It makes a big difference. For instance, in OLS you have a typical model: $$y_i=c+\phi x_i+e_i$$ Here, there is no particular order in the index $i$. You might be measuring the output (GDP) of countries indexed by $i$, and it doesn't matter in which order you add them to the data set. In time series, instead of some random sample $i$ you get the ordered time intervals $t$. Now if you look at the US GDP time series, the observations come in a very particular order. Also, stationarity is a time series version of exogeneity. It's a weakened version of the exogeneity requirement from OLS. So, it's not like OLS doesn't care at all about these issues. It does, but time series data, due to its time ordering, complicates things, so econometricians came up with a weaker version of exogeneity. It allows one to do something. Note how in time series the well-known Gauss–Markov conditions are loosened up a bit. In a dynamic model such as the one you mentioned, with the lagged dependent variable as a regressor, some benign (in OLS) problems become serious issues, e.g. autocorrelation in residuals.
Stationarity: time series vs regression
In time series the data is ordered. It makes a big difference. For instance, is OLS you have a typical model: $$y_i=c+\phi x_i+e_i$$ Here, there is no particular order in the index $i$. You might be m
Stationarity: time series vs regression In time series the data is ordered. It makes a big difference. For instance, is OLS you have a typical model: $$y_i=c+\phi x_i+e_i$$ Here, there is no particular order in the index $i$. You might be measuring the output (GDP) of countries indexed by $i$, and it doesn't matter in each order you add them to the data set. In time series instead of some random sample $i$ you get the ordered time intervals $t$. Now if you look at the US GDP time series, the observations come in a very particular order. Also, stationarity is a time series version of the exogeneity. It's a weakened version of exogneity requirement from OLS. So, it's not like OLS doesn't care at all about these issues. It does, but time series due to its time ordering complicates the things, so econometricians came up with a weaker version of exogeneity. It allows to do something. Note how in time series a well known Gauss-markov conditions are losened up a bit. In dynamic model such as you mentioned with lagged dependent variable as a regressor, some benign (in OLS) problems become serious issues, e.g. autocorrelation in residuals.
Stationarity: time series vs regression In time series the data is ordered. It makes a big difference. For instance, is OLS you have a typical model: $$y_i=c+\phi x_i+e_i$$ Here, there is no particular order in the index $i$. You might be m
34,674
Stationarity: time series vs regression
Does it have to do with the fitting algorithm we use (OLS against ML)? Yes. Stationarity is a condition for some time series models, but not others. It is required for ARMA models. ARIMA models are used when the time series is not stationary to transform it so that it is suitable for ARMA modeling. Exponential smoothing forecasting methods on the other hand don't require stationarity. What is strange to me is that they don't check for stationarity and the predictions they make are accurate. Checking for stationarity isn't about improving the accuracy of the model per se, it is about keeping the model stable. See this post and this post. Trying to fit an ARMA model to non-stationary data would lead to a model that diverges very quickly. They use OLS regression in order to analyse and predict a certain variable y. Among the different explanatory variables they use, they have the lagged values of y. It is possible that the data is already stationary by nature - which is why OLS works on the lagged values of y without any transformations. Also, for the case of ARIMA models, ML is used only if moving average terms are used. If only auto-regressive terms are used, then OLS works. From what you describe they don't have any moving average terms in the explanatory variables.
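A small hedged sketch of the OLS-vs-ML point (the AR(1) coefficient 0.7 and the sample size are invented for the example): a pure autoregression can be estimated by OLS on the lagged value, and arima() gives an almost identical ML estimate on stationary data.
set.seed(1)
y <- arima.sim(model = list(ar = 0.7), n = 2000)   # stationary AR(1) series
n <- length(y)
coef(lm(y[-1] ~ y[-n]))[2]                         # OLS estimate of the AR(1) coefficient
coef(arima(y, order = c(1, 0, 0)))["ar1"]          # ML estimate of the same coefficient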
Stationarity: time series vs regression
Does it have to do with the fitting algorithm we use(OLS against ML)? Yes. Stationarity is a condition for some time series models, but not others. It is required for ARMA models. ARIMA models are us
Stationarity: time series vs regression Does it have to do with the fitting algorithm we use(OLS against ML)? Yes. Stationarity is a condition for some time series models, but not others. It is required for ARMA models. ARIMA models are used when the time series is not stationary to transform it so that it is suitable for ARMA modeling. Exponential smoothing forecasting methods on the other hand don't require stationarity. What is strange to me is that they don't check for stationarity and the predictions they make are accurate. Checking for stationarity isn't about improving the accuracy of the model per se , it is about keeping the model stable. See this post and this post. Trying to fit an ARMA model to non stationary data would lead to a model that diverges very quickly. they use OLS regression in order to analyse and predict a certain variable y. Among the different explanatory variables they use, they have the lagged values of y. It is possible that the data is already stationary by nature - which is why OLS works on the lagged values of y without any transformations. Also, for the case of ARIMA models, ML is used only if moving average terms are used. If only auto-regressive terms are used that OLS works. From what you describe they don't have any moving average terms in the explanatory variables.
Stationarity: time series vs regression Does it have to do with the fitting algorithm we use(OLS against ML)? Yes. Stationarity is a condition for some time series models, but not others. It is required for ARMA models. ARIMA models are us
34,675
Stationarity: time series vs regression
It is possible that the data is already stationary by nature - which is why OLS works on the lagged values of y without any transformations. Also, for the case of ARIMA models, ML is used only if moving average terms are used. If only auto-regressive terms are used, then OLS works. From what you describe they don't have any moving average terms in the explanatory variables. Note: The above argument seems realistic. What happens is that using the lagged dependent variable in OLS, or ML in ARIMA, actually cancels out most of the effects of the higher orders, say the trend effects or cyclical effects, in the moving average terms. That is why, even if both processes are non-stationary, those effects remain cancelled out and do not become visible in the residual error component, even though individual unit-root tests will show I(1) characteristics.
Stationarity: time series vs regression
It is possible that the data is already stationary by nature - which is why OLS works on the lagged values of y without any transformations. Also, for the case of ARIMA models, ML is used only if movi
Stationarity: time series vs regression It is possible that the data is already stationary by nature - which is why OLS works on the lagged values of y without any transformations. Also, for the case of ARIMA models, ML is used only if moving average terms are used. If only auto-regressive terms are used that OLS works. From what you describe they don't have any moving average terms in the explanatory variables. Note: The above argument seems realistic. What happens the application of lagged dependent variable in OLS or the ML in ARIMA actually cancels out the most of the effects of the higher orders say the trend effects or cyclical effects in the terms of moving average. That's why, even if both processes are non-stationary processes, but they remain cancelled out and do not become visible in the residual error component even if individual Tests of UNIT roots will show I(1) characteristics.
Stationarity: time series vs regression It is possible that the data is already stationary by nature - which is why OLS works on the lagged values of y without any transformations. Also, for the case of ARIMA models, ML is used only if movi
34,676
Difference between Random forest vs Bagging in sklearn
we can use "Iris" dataset to demonstrate both Bagging and Random Forests   first, Let's take a look at the Iris dataset: 150 samples of flowers 3 classes $y$ 4 continuous features $x_j$   to do example in sklearn, we need to import the usual suspects... from sklearn.datasets import load_iris from sklearn.ensemble import BaggingClassifier, RandomForestClassifier import pandas as pd iris = load_iris() y= pd.Series(iris.target) then we specify which of two ML models we will be using: # create Ensemble object/instance model = BaggingClassifier(base_estimator=None) model = RandomForestClassifier() regardless of which of the 2 we use, our sklearn steps will be the same: # Train the model using the training sets model.fit(X_train, y_train) # OUTPUT ## check score model.score(X_train, y_train) ## Predict on test set predicted= model.predict(X_test)     In order to visually and fully demonstrate the Bagging voting†, WLOG we have a simplified version "Iris" dataset has: 7 total observations only◊ 4 features $\mathbf{x}_j$ 3 different possible species as the labels $y$ instead of their Latin names, lets for simplicity denote the 3 different species as Species $\mathbf{A})$ Species $\mathbf{B})$ Species $\mathbf{C})$ because sklearn needs numeric input always, their sklearn numeric labels $= [0, 1, 2]$ Species $\mathbf{A}) \mapsto 0$ Species $\mathbf{B})\mapsto 1$ Species $\mathbf{C})\mapsto 2$     for our 7-observation simplified "Iris" Training dataset, we: Take 5 Bootstrap-samples each Bootstrap-sample is of cardinality 7 selected WITH replacement from our simplified "Iris" dataset i.e., each bootstrap-sample has 7 observations WITH possible Repetitions   then in step 2, we Train either: Train a distinct decision tree classifier on each of the Bootstrap-samples each tree is fit on its own distinct Bootstrap-sample of 7 observations Train a distinct Random Forests decision tree classifier on each of the Bootstrap-samples Set each Random Forests classifier to decision its tree node using only 3 of the 4 features $\mathbf{x}_j$ the 3 features selected are a random subset of the 4 possible features $\mathbf{x}_j$ (subset are drawn withOUT replacement) so we end up with a Bootstrap-samples-aggregated ensemble of a total of 5 decision trees classifiers, which are either fully-grown in the case of Bagging or grown using feature subsets in the case of Random Forests.   
Predict new data labels using Bootstrap-samples-aggregated ensemble meta-estimator Random Forests & other Bagging meta-estimators use all 5 decision trees to predict the label $y$ all 5 trees predict and the meta-estimator averages their individual predictions (see example calculation below)     for a Test-set X_test consisting of a single previously-unseen flower: if the $(A, B, C)$ predict_proba outputs of the 5 individual sklearn subestimator trees are: $(0.75, 0.20, 0.05)$ $(0.60, 0.35, 0.05)$ $(0.55, 0.40, 0.05)$ $(0.35, 0.60, 0.05)$ $(0.50, 0.45, 0.05)$ then either of the Bootstrap-samples-aggregating meta-estimators' Test-set predict = A) on this specific flower-sample i.e., our new Test-set observation would be classified as Species A) by either the Bagging or Random Forests meta-estimator because the highest of the 3 classes' average predicted probabilities is for Species $\mathbf{A})$: $\frac{0.75+0.60+0.55+0.35+0.50}{5} = 0.55$ because all of sklearn's Bootstrap-samples-aggregating classification meta-estimators have a predict method whose averaging procedure returns the class with the highest average predicted probability   †(and also to be able to use an image I found on the interwebs) ◊this (ridiculous) 7-datapoint "Iris" data set is for explanation purposes only
Difference between Random forest vs Bagging in sklearn
we can use "Iris" dataset to demonstrate both Bagging and Random Forests   first, Let's take a look at the Iris dataset: 150 samples of flowers 3 classes $y$ 4 continuous features $x_j$   to do exa
Difference between Random forest vs Bagging in sklearn we can use "Iris" dataset to demonstrate both Bagging and Random Forests   first, Let's take a look at the Iris dataset: 150 samples of flowers 3 classes $y$ 4 continuous features $x_j$   to do example in sklearn, we need to import the usual suspects... from sklearn.datasets import load_iris from sklearn.ensemble import BaggingClassifier, RandomForestClassifier import pandas as pd iris = load_iris() y= pd.Series(iris.target) then we specify which of two ML models we will be using: # create Ensemble object/instance model = BaggingClassifier(base_estimator=None) model = RandomForestClassifier() regardless of which of the 2 we use, our sklearn steps will be the same: # Train the model using the training sets model.fit(X_train, y_train) # OUTPUT ## check score model.score(X_train, y_train) ## Predict on test set predicted= model.predict(X_test)     In order to visually and fully demonstrate the Bagging voting†, WLOG we have a simplified version "Iris" dataset has: 7 total observations only◊ 4 features $\mathbf{x}_j$ 3 different possible species as the labels $y$ instead of their Latin names, lets for simplicity denote the 3 different species as Species $\mathbf{A})$ Species $\mathbf{B})$ Species $\mathbf{C})$ because sklearn needs numeric input always, their sklearn numeric labels $= [0, 1, 2]$ Species $\mathbf{A}) \mapsto 0$ Species $\mathbf{B})\mapsto 1$ Species $\mathbf{C})\mapsto 2$     for our 7-observation simplified "Iris" Training dataset, we: Take 5 Bootstrap-samples each Bootstrap-sample is of cardinality 7 selected WITH replacement from our simplified "Iris" dataset i.e., each bootstrap-sample has 7 observations WITH possible Repetitions   then in step 2, we Train either: Train a distinct decision tree classifier on each of the Bootstrap-samples each tree is fit on its own distinct Bootstrap-sample of 7 observations Train a distinct Random Forests decision tree classifier on each of the Bootstrap-samples Set each Random Forests classifier to decision its tree node using only 3 of the 4 features $\mathbf{x}_j$ the 3 features selected are a random subset of the 4 possible features $\mathbf{x}_j$ (subset are drawn withOUT replacement) so we end up with a Bootstrap-samples-aggregated ensemble of a total of 5 decision trees classifiers, which are either fully-grown in the case of Bagging or grown using feature subsets in the case of Random Forests.   
Predict new data labels using Bootstrap-samples-aggregated ensemble meta-estimator Random Forests & other Bagging meta-estimators use all the 5 decision trees to predict the label $y$ all 5 trees predict and RF meta-estimator averages their individual predictions (see example calculation below)     for a Test-set X_test consisting of a single previously-unseen flower: if the $(A, B, C)$ predict_proba output of the 5 individual sklearn subestimator trees are: $(0.75, 0.20, 0.05)$ $(0.60, 0.35, 0.05)$ $(0.55, 0.40, 0.05)$ $(0.35, 0.60, 0.05)$ $(0.50, 0.45, 0.05)$ then either of the Bootstrap-samples-aggregating meta-estimators' Test-set predict = A) on this specific flower-sample i.e., our new Test-set observation would be classified as Species A) by either the Bagging or Random Forests meta-estimator because the highest of the 3 classes' average predicted probability is for Species $\mathbf{A}) = \frac{3 \, = \sum bootstraps \,}{5 \; bootstraps} = 0.55$ because all of sklearn's Bootstrap-samples-aggregating classification meta-estimators have a predict method whose averaging procedure returns the class with the highest average predicted probability   †(and also to be able to use an image i found on the interwebs) ◊this (ridiculous) 7-datapoint"Iris" data set is for explanation purposes only
Difference between Random forest vs Bagging in sklearn we can use "Iris" dataset to demonstrate both Bagging and Random Forests   first, Let's take a look at the Iris dataset: 150 samples of flowers 3 classes $y$ 4 continuous features $x_j$   to do exa
34,677
Difference between Random forest vs Bagging in sklearn
The difference lies in the node-level splitting. A Bagging algorithm using a decision tree would consider all the features to decide the best split at each node. The trees built in a Random Forest, on the other hand, use a random subset of the features at every node to decide the best split.
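For instance (an illustration in R rather than sklearn, using the randomForest package): setting mtry to the total number of features makes every split consider all features, which is bagging, while a smaller mtry gives the usual random forest behaviour.
library(randomForest)
set.seed(1)
bag <- randomForest(Species ~ ., data = iris, mtry = 4, ntree = 500)  # all 4 features at every split: bagging
rf  <- randomForest(Species ~ ., data = iris, mtry = 2, ntree = 500)  # random subset of 2 features per split: random forest
bag
rf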
Difference between Random forest vs Bagging in sklearn
The difference is at the node level splitting for both. So Bagging algorithm using a decision tree would use all the features to decide the best split. On the other hand, the trees built in Random for
Difference between Random forest vs Bagging in sklearn The difference is at the node level splitting for both. So Bagging algorithm using a decision tree would use all the features to decide the best split. On the other hand, the trees built in Random forest use a random subset of the features at every node, to decide the best split.
Difference between Random forest vs Bagging in sklearn The difference is at the node level splitting for both. So Bagging algorithm using a decision tree would use all the features to decide the best split. On the other hand, the trees built in Random for
34,678
Why am I not getting a uniform p-value distribution from logistic regression with random predictors?
There are several issues here. In particular, there seem to be some confusions about how to simulate a standard logistic regression. Briefly, you don't add noise around... the probability "signal". As a result of the way you did this, there is a huge amount of variability in the resulting 'binomial'(-esque) data, way more than there should be. Here are the probabilities in your dataset: plot(flips[,1]/rowSums(flips)) If those .4+ observations end up on one side or the other, they will act as 'outliers' (they aren't really) and drive a type 1 error in a model that doesn't take into account the fact that these data aren't really binomial. Here is a version that uses a simple hack to allow the model to detect and account for overdispersion: set.seed(5082) pval <- numeric(ntrial) for (i in 1:ntrial){ yrandom <- rnorm(lseries) s <- summary(glm(flips ~ yrandom, family="quasibinomial")) # changed family pval[i] <- s$coefficients[2,4] } hist(pval, breaks=100) print(quantile(pval, probs=c(.01,.05))) # 1% 5% # 0.006924617 0.046977246 actualCL <- sapply(qprobs, function(c){ sum(pval <= c) / length(pval) }) print(data.frame(nominalCL=qprobs, actualCL)) # nominalCL actualCL # 1 0.05 0.0536 # 2 0.01 0.0128 This is the model summary from the last iteration. Note that the dispersion is estimated to be $\approx 12\times$ what it should be for a true binomial: s # Call: # glm(formula = flips ~ yrandom, family = "quasibinomial") # # Deviance Residuals: # Min 1Q Median 3Q Max # -5.167 -2.925 -1.111 1.101 8.110 # # Coefficients: # Estimate Std. Error t value Pr(>|t|) # (Intercept) -1.96910 0.14942 -13.178 <2e-16 *** # yrandom -0.02736 0.14587 -0.188 0.852 # --- # Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 # # (Dispersion parameter for quasibinomial family taken to be 11.97867) # # Null deviance: 532.38 on 49 degrees of freedom # Residual deviance: 531.96 on 48 degrees of freedom # AIC: NA # # Number of Fisher Scoring iterations: 5 Here is another version, where I fit the same model that you do, but just generate the data without the added noise around the signal. (Note that code that is otherwise the same is omitted for brevity.) set.seed(541713) ... pactual <- 1 / (1 + exp(-(log(pavg / (1-pavg))))) # deleted yactual ... for (i in 1:ntrial){ yrandom <- rnorm(lseries) if (orthogonalPredictor){ yrandom <- residuals(lm(yrandom ~ yactual)) } s <- summary(glm(flips ~ yrandom, family="binomial")) pval[i] <- s$coefficients[2,4] } hist(pval, breaks=100) print(quantile(pval, probs=c(.01,.05))) # 1% 5% # 0.01993318 0.07027473 actualCL <- sapply(qprobs, function(c){ sum(pval <= c) / length(pval) }) print(data.frame(nominalCL=qprobs, actualCL)) # nominalCL actualCL # 1 0.05 0.0372 # 2 0.01 0.0036
Why am I not getting a uniform p-value distribution from logistic regression with random predictors?
There are several issues here. In particular, there seem to be some confusions about how to simulate a standard logistic regression. Briefly, you don't add noise around... the probability "signal".
Why am I not getting a uniform p-value distribution from logistic regression with random predictors? There are several issues here. In particular, there seem to be some confusions about how to simulate a standard logistic regression. Briefly, you don't add noise around... the probability "signal". As a result of the way you did this, there is a huge amount of variability in the resulting 'binomial'(-esque) data, way more than there should be. Here are the probabilities in your dataset: plot(flips[,1]/rowSums(flips)) If those .4+ observations end up on one side or the other, they will act as 'outliers' (they aren't really) and drive a type 1 error in a model that doesn't take into account the fact that these data aren't really binomial. Here is a version that uses a simple hack to allow the model to detect and account for overdispersion: set.seed(5082) pval <- numeric(ntrial) for (i in 1:ntrial){ yrandom <- rnorm(lseries) s <- summary(glm(flips ~ yrandom, family="quasibinomial")) # changed family pval[i] <- s$coefficients[2,4] } hist(pval, breaks=100) print(quantile(pval, probs=c(.01,.05))) # 1% 5% # 0.006924617 0.046977246 actualCL <- sapply(qprobs, function(c){ sum(pval <= c) / length(pval) }) print(data.frame(nominalCL=qprobs, actualCL)) # nominalCL actualCL # 1 0.05 0.0536 # 2 0.01 0.0128 This is the model summary from the last iteration. Note that the dispersion is estimated to be $\approx 12\times$ what it should be for a true binomial: s # Call: # glm(formula = flips ~ yrandom, family = "quasibinomial") # # Deviance Residuals: # Min 1Q Median 3Q Max # -5.167 -2.925 -1.111 1.101 8.110 # # Coefficients: # Estimate Std. Error t value Pr(>|t|) # (Intercept) -1.96910 0.14942 -13.178 <2e-16 *** # yrandom -0.02736 0.14587 -0.188 0.852 # --- # Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1 # # (Dispersion parameter for quasibinomial family taken to be 11.97867) # # Null deviance: 532.38 on 49 degrees of freedom # Residual deviance: 531.96 on 48 degrees of freedom # AIC: NA # # Number of Fisher Scoring iterations: 5 Here is another version, where I fit the same model that you do, but just generate the data without the added noise around the signal. (Note that code that is otherwise the same is omitted for brevity.) set.seed(541713) ... pactual <- 1 / (1 + exp(-(log(pavg / (1-pavg))))) # deleted yactual ... for (i in 1:ntrial){ yrandom <- rnorm(lseries) if (orthogonalPredictor){ yrandom <- residuals(lm(yrandom ~ yactual)) } s <- summary(glm(flips ~ yrandom, family="binomial")) pval[i] <- s$coefficients[2,4] } hist(pval, breaks=100) print(quantile(pval, probs=c(.01,.05))) # 1% 5% # 0.01993318 0.07027473 actualCL <- sapply(qprobs, function(c){ sum(pval <= c) / length(pval) }) print(data.frame(nominalCL=qprobs, actualCL)) # nominalCL actualCL # 1 0.05 0.0372 # 2 0.01 0.0036
Why am I not getting a uniform p-value distribution from logistic regression with random predictors? There are several issues here. In particular, there seem to be some confusions about how to simulate a standard logistic regression. Briefly, you don't add noise around... the probability "signal".
34,679
Why am I not getting a uniform p-value distribution from logistic regression with random predictors?
The code's relation to the experimental goal is confusing. Do you expect a significant predictor for any selection of orthogonalPredictor or sd? I do not. Based on my interpretation, it looks like the experiment does not align with what we are trying to test. Where the noise is generated, it's implicitly being repeatedly (i.e. non-randomly) attached to individual observations, which provides a signal to the regression. Here's what I think was intended: lseries <- 50 nbinom <- 100 ntrial <- 5000 pavg <- .1 # median probability run_experiment <- function(sd = 0, orthogonalPredictor = FALSE, predictor_noise_sd = NA) { qprobs <- c(.05,.01) # find the true quantiles for these p-values yactual <- sd * rnorm(lseries) # random signal pactual <- 1 / (1 + exp(-(yactual + log(pavg / (1-pavg))))) heads <- rbinom(lseries, nbinom, pactual) ## test data, binomial noise around pactual, the probability "signal" flips_expanded <- rbind(data.frame(flip_result = rep(rep(1, length(heads)), heads), y_actual = rep(yactual, heads)), data.frame(flip_result = rep(rep(0, length(heads)), nbinom-heads), y_actual = rep(yactual, nbinom-heads)) ) summary(glm(flip_result ~ y_actual, flips_expanded, family = "binomial")) pval <- numeric(ntrial) for (i in 1:ntrial){ flips_expanded$y <- rnorm(nrow(flips_expanded)) if (orthogonalPredictor){ flips_expanded$y <- residuals(lm(y ~ y_actual, flips_expanded)) } if (!is.na(predictor_noise_sd)) {flips_expanded$y <- rnorm(nrow(flips_expanded), flips_expanded$y_actual, predictor_noise_sd)} s <- summary(glm(flip_result ~ y, flips_expanded, family="binomial")) pval[i] <- s$coefficients[2,4] } hist(pval, breaks=100) print(quantile(pval, probs=c(.01,.05))) actualCL <- sapply(qprobs, function(c){ sum(pval <= c) / length(pval) }) print(data.frame(nominalCL=qprobs, actualCL)) } The critical changes are: Expanding the data frame to per-observation format instead of a condensed format (flips_expanded) Also experimenting with a correlated predictor For no correlation between y_actual & our predictor y: > run_experiment() 1% 5% 0.01077116 0.05045712 nominalCL actualCL 1 0.05 0.0496 2 0.01 0.0096 And creating a fairly strong correlation: > run_experiment(1,FALSE,10) 1% 5% 0.0001252817 0.0019125482 nominalCL actualCL 1 0.05 0.3002 2 0.01 0.1286
Why am I not getting a uniform p-value distribution from logistic regression with random predictors?
The code's relation to the experimental goal is confusing. Do you expect a significant predictor for any selection of orthogonalPredictor or sd? I do not. Based on my interpretation, it looks like the
Why am I not getting a uniform p-value distribution from logistic regression with random predictors? The code's relation to the experimental goal is confusing. Do you expect a significant predictor for any selection of orthogonalPredictor or sd? I do not. Based on my interpretation, it looks like the experiment does not align with what we are trying to test. Where the noise is generated, it's implicitly being repeatedly (i.e. non-randomly) attached to individual observations, which provides a signal to the regression. Here's what I think was intended: lseries <- 50 nbinom <- 100 ntrial <- 5000 pavg <- .1 # median probability run_experiment <- function(sd = 0, orthogonalPredictor = FALSE, predictor_noise_sd = NA) { qprobs <- c(.05,.01) # find the true quantiles for these p-values yactual <- sd * rnorm(lseries) # random signal pactual <- 1 / (1 + exp(-(yactual + log(pavg / (1-pavg))))) heads <- rbinom(lseries, nbinom, pactual) ## test data, binomial noise around pactual, the probability "signal" flips_expanded <- rbind(data.frame(flip_result = rep(rep(1, length(heads)), heads), y_actual = rep(yactual, heads)), data.frame(flip_result = rep(rep(0, length(heads)), nbinom-heads), y_actual = rep(yactual, nbinom-heads)) ) summary(glm(flip_result ~ y_actual, flips_expanded, family = "binomial")) pval <- numeric(ntrial) for (i in 1:ntrial){ flips_expanded$y <- rnorm(nrow(flips_expanded)) if (orthogonalPredictor){ flips_expanded$y <- residuals(lm(y ~ y_actual, flips_expanded)) } if (!is.na(predictor_noise_sd)) {flips_expanded$y <- rnorm(nrow(flips_expanded), flips_expanded$y_actual, predictor_noise_sd)} s <- summary(glm(flip_result ~ y, flips_expanded, family="binomial")) pval[i] <- s$coefficients[2,4] } hist(pval, breaks=100) print(quantile(pval, probs=c(.01,.05))) actualCL <- sapply(qprobs, function(c){ sum(pval <= c) / length(pval) }) print(data.frame(nominalCL=qprobs, actualCL)) } The critical changes are: Expanding the data frame to per-observation format instead of a condensed format (flips_expanded) Also experimenting with a correlated predictor For no correlation between y_actual & our predictor y: > run_experiment() 1% 5% 0.01077116 0.05045712 nominalCL actualCL 1 0.05 0.0496 2 0.01 0.0096 And creating a fairly strong correlation: > run_experiment(1,FALSE,10) 1% 5% 0.0001252817 0.0019125482 nominalCL actualCL 1 0.05 0.3002 2 0.01 0.1286
Why am I not getting a uniform p-value distribution from logistic regression with random predictors? The code's relation to the experimental goal is confusing. Do you expect a significant predictor for any selection of orthogonalPredictor or sd? I do not. Based on my interpretation, it looks like the
34,680
Simulating data - correlation vs causation
All this is easier with a theory of causality. For example, let's use here the Structural Causal Models (which includes the Potential Outcomes) approach. A Structural Causal Model (SCM) is a triplet $M = \langle V, U, F\rangle$ where $U$ is a set of exogenous variables, $V$ a set of endogenous variables and $F$ is a set of structural equations that determines the values of each endogenous variable. The structural equations are to be read as assignments, not equalities. For example, consider the simple structural equation $Y = X^2$. This is meant to be read $Y\leftarrow X^2$, in the sense that if I experimentally set $X=2$ then this causally determines the value of $Y = 4$, but experimentally setting $Y = 4$ does nothing to $X$. The asymmetry is important/fundamental in causality: rain causes the floor to be wet, but making the floor wet does not cause rain. So our causal model can be thought of as functional relationships among variables, and we are considering these relationships as autonomous. You can think of it as an idealized representation of the real world, where the variables $V$, the endogenous variables, are what we choose to model, and the variables $U$ are the aspects we choose to ignore. Since we choose not to model the $U$, what we usually do is to represent our ignorance about $U$ with a probability distribution $P(U)$ over the domain of $U$, giving us a probabilistic SCM, which is the pair $\langle M, P(U) \rangle$. Notice this means that causal relationships are ultimately functional relationships, therefore causal relationships may or may not translate to specific probabilistic dependencies. Finally, every causal model can be associated with a directed (acyclic) graph $G(M)$. Hence, one way to simulate from a probabilistic causal model is by specifying: (i) the endogenous variables $V$ you are going to model; (ii) the exogenous variables $U$, which are usually the "disturbances", along with their joint probability distribution; and, (iii) the (causal) structural relationships among the variables. It might be easier to start this process qualitatively by first drawing the causal DAG with the main features that you want to illustrate and then add the details of the simulation (functional forms) later. To see how this can be easily done in practice, let's simulate a simple causal model that illustrates Simpson's paradox (for more see Pearl). Suppose our model $M$ is given by the following causal DAG, where the variables in parentheses are "unobserved" and each variable has an associated exogenous disturbance $U$ which is omitted for convenience: More specifically we will assume the following structural equations $F$: $$ \begin{aligned} W_1 &= U_{W_1}\\ W_2 &= U_{W_2}\\ Z &= W_1 + W_2 + U_{Z}\\ X &= W_1 + U_{x}\\ Y &= X+ 10W_2+ U_{y}\\ \end{aligned} $$ Finally, assume all disturbances in $U$ are independent standard normal random variables. Now it's easy to simulate from our causal model. 
In R for instance: rm(list = ls()) set.seed(1) n <- 1e5 w1 <- rnorm(n) w2 <- rnorm(n) z <- w1 + w2 + rnorm(n) x <- w1 + rnorm(n) y <- x + 10*w2 + rnorm(n) This example is interesting because if you run the regression $Y \sim X$ you get $1$: lm(y ~ x) Call: lm(formula = y ~ x) Coefficients: (Intercept) x 0.01036 1.00081 But if you further "control" for $Z$, which is a pre-treatment variable correlated with both $X$ and $Y$ --- and some people still erroneously would say it's a confounder --- you will actually see a sign reversal of the estimate and get $-1$: lm(y ~ x + z) Call: lm(formula = y ~ x + z) Coefficients: (Intercept) x z 0.00845 -1.01127 4.00041 In this example, since we simulated the data, we know the true causal effect is $1$ which is captured by the first regression. But you can only know that if you know the true causal structure. There's nothing in the data itself that tells you which one is the correct answer. Hence if you simulate this and give to a researcher only the variables $x$, $y$ and $z$ he can't tell the right answer just from looking at the correlations. If you want further play/simulate causal models with multi-stage Simpson's paradox reversals, you can check it here. Simulating correlations/dependencies To simulate correlations/dependencies you can take a similar approach. You can simply create a causal model that gives you the correlations/dependencies you want (adding latent variables if needed), simulate as above and the resulting data will have the desired correlations/dependencies. To make it easier, you can start by drawing the causal DAG (bayesian network) and read from the graph if the desired conditional dependencies/independencies are implied by your model. After that you might think of specific functional forms to get other quantitative aspects that you want. Notice that several models with different causal interpretations can give you the same correlations.
Simulating data - correlation vs causation
All this is easier with a theory of causality. For example, let's use here the Structural Causal Models (which includes the Potential Outcomes) approach. A Structural Causal Model (SCM) is triplet $M
Simulating data - correlation vs causation All this is easier with a theory of causality. For example, let's use here the Structural Causal Models (which includes the Potential Outcomes) approach. A Structural Causal Model (SCM) is triplet $M = \langle V, U, F\rangle$ where $U$ is a set of exogeneous variables, $V$ a set of endogenous variables and $F$ is a set of structural equations that determines the values of each endogenous variable. The structural equations are in the sense of assignments not equalities. For example, consider the simple structural equation $Y = X^2$. This is meant to be read $Y\leftarrow X^2$, in the sense that if I experimentally set $X=2$ then this causally determines the value of $Y = 4$ but experimentally setting $Y = 4$ does nothing to $X$. The asymmetry is important/fundamental in causality: rain causes the floor to be wet, but making the floor wet does not cause rain. So our causal model can be thought as functional relationships among variables and we are considering these relationships as autonomous. You can think of it as an idealized representation of the real world, where the variables $V$, the endogenous variables, are what we choose to model, and the variables $U$ are the aspects we chose to ignore. Since we chose not to model the $U$, what we usually do is to represent our ignorance about $U$ with a probability distribution $P(U)$ over the domain of $U$, giving us a probabilistic SCM which is pair $\langle M, P(U) \rangle$. Notice this means that causal relationships are ultimately functional relationships, therefore causal relationships may or may not translate to specific probabilistic dependencies. Finally, every causal model can be associated with a directed (acyclic) graph $G(M)$. Hence, one way to simulate from a probabilistic causal model is by specifying: (i) the endogenous variables $V$ you are going to model; (ii) the exogenous variables $U$ which are usually the "disturbances", along with their joint probability distribution; and, (iii) the (causal) structural relationships among the variables. It might be easier to start this process qualitatively by first drawing the causal DAG with the main features that you want to illustrate and then add the details of the simulation (functional forms) later. To see how this can be easily done in practice, let's simulate a simple causal model that illustrates simpson's paradox (for more see Pearl). Suppose our model $M$ is given by the following causal DAG, where the variables in parenthesis are "unobserved" and each variable has an associated exogenous disturbance $U$ which is omitted for convenience: More specifically we will assume the following structural equations $F$: $$ \begin{aligned} W_1 &= U_{W_1}\\ W_2 &= U_{W_2}\\ Z &= W_1 + W_2 + U_{Z}\\ X &= W_1 + U_{x}\\ Y &= X+ 10W_2+ U_{y}\\ \end{aligned} $$ Finally, assume all disturbances in $U$ are independent standard normal random variables. Now it's easy to simulate from our causal model. 
In R for instance: rm(list = ls()) set.seed(1) n <- 1e5 w1 <- rnorm(n) w2 <- rnorm(n) z <- w1 + w2 + rnorm(n) x <- w1 + rnorm(n) y <- x + 10*w2 + rnorm(n) This example is interesting because if you run the regression $Y \sim X$ you get $1$: lm(y ~ x) Call: lm(formula = y ~ x) Coefficients: (Intercept) x 0.01036 1.00081 But if you further "control" for $Z$, which is a pre-treatment variable correlated with both $X$ and $Y$ --- and some people still erroneously would say it's a confounder --- you will actually see a sign reversal of the estimate and get $-1$: lm(y ~ x + z) Call: lm(formula = y ~ x + z) Coefficients: (Intercept) x z 0.00845 -1.01127 4.00041 In this example, since we simulated the data, we know the true causal effect is $1$ which is captured by the first regression. But you can only know that if you know the true causal structure. There's nothing in the data itself that tells you which one is the correct answer. Hence if you simulate this and give to a researcher only the variables $x$, $y$ and $z$ he can't tell the right answer just from looking at the correlations. If you want further play/simulate causal models with multi-stage Simpson's paradox reversals, you can check it here. Simulating correlations/dependencies To simulate correlations/dependencies you can take a similar approach. You can simply create a causal model that gives you the correlations/dependencies you want (adding latent variables if needed), simulate as above and the resulting data will have the desired correlations/dependencies. To make it easier, you can start by drawing the causal DAG (bayesian network) and read from the graph if the desired conditional dependencies/independencies are implied by your model. After that you might think of specific functional forms to get other quantitative aspects that you want. Notice that several models with different causal interpretations can give you the same correlations.
Simulating data - correlation vs causation All this is easier with a theory of causality. For example, let's use here the Structural Causal Models (which includes the Potential Outcomes) approach. A Structural Causal Model (SCM) is triplet $M
34,681
Simulating data - correlation vs causation
First, causality cannot be observed in the output. Only correlation. You can imagine two functions that output exactly the same $(x_1,x_2)$ pairs (given the same random seed). Yet you could interpret one function as creating causally related pairs, the other not. It depends on how the code is written (or how you interpret it), not what it actually computes as a final result. Causality as defined with DAGs requires the possibility of intervening on one of the two variables, at least as a thought experiment. Imagine it as a debugger: you interrupt your program just after the first variable was computed, reset it to a certain value, and then resume execution. Will this impact the second variable? In the way you explain the process (second case), call: $A$: choice for the distribution $B$: choice for its parameters $(X_1,X_2)$: final output The DAG is: There is no causal relationship between $X_1$ and $X_2$.
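A minimal R sketch of that thought experiment (my own illustration, with made-up generators): both functions below return identical pairs for the same seed, but only in the first one would intervening on the first variable change the second.

gen_causal <- function() {      # x2 is computed from x1: an x1 -> x2 arrow
  x1 <- rnorm(1)
  x2 <- 2 * x1
  c(x1, x2)
}
gen_common <- function() {      # both are computed from one shared draw: no x1 -> x2 arrow
  z <- rnorm(1)
  c(z, 2 * z)
}
set.seed(1); gen_causal()       # same numbers...
set.seed(1); gen_common()       # ...same numbers again

# "Debugger" intervention: overwrite x1 right after it is computed.
# In gen_causal, x2 would then change (it is computed from the new x1);
# in gen_common, x2 would not, since it never reads x1.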
Simulating data - correlation vs causation
First, causality cannot be observed in the output. Only correlation. You can imagine two functions that output exactly the same $(x_1,x_2)$ pairs (given the same random seed). Yet you could interpret
Simulating data - correlation vs causation First, causality cannot be observed in the output. Only correlation. You can imagine two functions that output exactly the same $(x_1,x_2)$ pairs (given the same random seed). Yet you could interpret one function as creating causally related pairs, the other not. It depends on how the code is written (or how you interpret it), not what it actually computes as a final result. Causality as defined with DAGs requires the possibility of intervention on one of the two variables at least as a thought experiment. Imagine it as a debugger: you interrupt your program just after the first variable was computed, reset it to a certain value, and then restore execution. Will this impact the second variable? In the way you explain the process (second case) call:` $A$: choice for the distribution $B$: choice for its parameters $(X_1,X_2)$: final output The DAG is : There is no causal relationship between $X_1$ and $X_2$.
Simulating data - correlation vs causation First, causality cannot be observed in the output. Only correlation. You can imagine two functions that output exactly the same $(x_1,x_2)$ pairs (given the same random seed). Yet you could interpret
34,682
Is MSE decreasing with increasing number of explanatory variables?
I am assuming that you are talking about an ordinary least squares regression scenario and are referring to in-sample MSE, and that $Y$ is an n-by-1 vector and $X$ is an n-by-p matrix of orthogonal predictors (or variables, by your terminology). Remember that the columns of any matrix $X$ can be orthogonalized; this will become important for making an intuitive leap later on. Let's also assume that the columns of $X$ have variance $1/n$ and are centered, such that their means are zero. Granted the foregoing, the answer to (1) is yes. Here's why. MSE = $(1/n)\|Y-\hat{Y}\|^2$ $=(1/n)\|Y-Xβ\|^2$ $=(1/n)\|Y-X(X^TX)^{-1}X^TY\|^2$ Now, $(X^TX)^{-1}$ is simply a p-by-p identity matrix (this follows from the orthogonality we imposed earlier). It then follows that $(X^TX)^{-1}X^T=X^T$, and we have MSE = $(1/n)\|Y-XX^TY\|^2$ So, what can we say about $XX^T$? We know it is an n-by-n matrix, and in the special case of p=n, it is an n-by-n identity matrix. That is, for p=n, MSE = $(1/n)\|Y-Y\|^2 = 0$, which we know to be intuitively correct. Furthermore, we know that MSE is at its maximum when we lack any predictors and $X$ is simply a column of ones; this is how we would fit an intercept-only model. In such a case, $XX^T$ is an n-by-n matrix of ones. As p gets larger, the off-diagonal elements of $XX^T$ shrink, eventually reaching zero when p=n. This is not a rigorous proof and it does not, in fact, demonstrate that MSE is monotonically decreasing with p, but I think it provides a good intuitive foundation for understanding the behavior of least squares fitting. Edit: If you want to extend this analysis to estimating MSE out of sample, then you would consider the following: $\hat{MSE}=\hat{bias}^2+\hat{var}$ $\hat{bias}^2$ is monotonically decreasing with p, and $\hat{var}$ is monotonically increasing with p. There are some relationships between p, n, and $\hat{MSE}$; for those, I recommend Wessel van Wieringen's lecture notes on ridge regression as well as Elements of Statistical Learning, as mentioned in another answer to your original question. Hopefully that answers (2). Edit: I thought about this some more and there are two additional points I'd like to make. The first is the specific conditions under which an additional predictor will reduce in-sample MSE. Those conditions are: 1) The additional predictor does not lie entirely within the column space of $X$; that is, it cannot be obtained via any linear combination of the existing predictors, and 2) The component of the new predictor lying outside the column space of $X$ is not orthogonal to $Y$. The second point is that we can do a simple thought experiment showing that the addition of new predictors does, in general, tend to decrease in-sample MSE. Imagine we have solved our linear regression and obtained $β$, that is, a p-by-1 vector of model coefficients. Now imagine that we add an additional predictor. Unless BOTH of the two aforementioned conditions are satisfied, the (p+1) value of $β$ will be zero, and the model is exactly the same as it was prior to the addition of the new predictor (same MSE). In general, though, both of those conditions will be satisfied, and therefore the (p+1) value of $β$ will be something other than zero. Since both the zero-appended $β$ and nonzero-appended $β$ lie within the solution space of the least squares regression with p+1 predictors, we conclude that the p+1 model must have lower MSE than the p model if the new coefficient is anything other than zero.
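To see claim (1) numerically, here is a small R check (my own sketch, not from the answer): with nested OLS models, the in-sample MSE never increases as columns are added, even when the extra columns are pure noise.

set.seed(123)
n <- 100; p <- 20
X <- matrix(rnorm(n * p), n, p)           # predictors (mostly irrelevant)
y <- X[, 1] - 0.5 * X[, 2] + rnorm(n)     # only the first two actually matter

in_sample_mse <- sapply(1:p, function(k) {
  fit <- lm(y ~ X[, 1:k, drop = FALSE])
  mean(resid(fit)^2)                      # training (in-sample) MSE
})
all(diff(in_sample_mse) <= 1e-12)         # TRUE: non-increasing in the number of predictors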
Is MSE decreasing with increasing number of explanatory variables?
I am assuming that you are talking about an ordinary least squares regression scenario and are referring to in-sample MSE, and that $Y$ is an n-by-1 vector and $X$ is an n-by-p matrix of orthogonal pr
Is MSE decreasing with increasing number of explanatory variables? I am assuming that you are talking about an ordinary least squares regression scenario and are referring to in-sample MSE, and that $Y$ is an n-by-1 vector and $X$ is an n-by-p matrix of orthogonal predictors (or variables, by your terminology). Remember that the columns of any matrix $X$ can be orthogonalized; this will become important for making an intuitive leap later on. Let's also assume that the columns of $X$ have variance $1/n$ and are centered, such that their means are zero. Granted the foregoing, the answer to (1) is yes. Here's why. MSE = $(1/n)\|Y-\hat{Y}\|^2$ $=(1/n)\|Y-Xβ\|^2$ $=(1/n)\|Y-X(X^TX)^{-1}X^TY\|^2$ Now, $(X^TX)^{-1}$ is simply a p-by-p identity matrix (this follows from the orthogonality we imposed earlier). It then follows that $(X^TX)^{-1}X^T=X^T$, and we have MSE = $(1/n)\|Y-XX^TY\|^2$ So, what can we say about $XX^T$? We know it is an n-by-n matrix, and in the special case of p=n, it is an n-by-n identity matrix. That is, for p=n, MSE = $(1/n)\|Y-Y\|^2 = 0$ Which we know to be intuitively correct. Furthermore, we know that MSE is at its maximum when we lack any predictors and $X$ is simply a column of ones; this is how we would fit an intercept-only model. In such a case, $XX^T$ is an n-by-n matrix of ones. As p gets larger, the off-diagonal elements of $XX^T$ shrink, eventually reaching zero when p=n. This is not a rigorous proof and it does not, in fact, demonstrate that MSE is monotonically decreasing with p, but I think it provides a good intuitive foundation for understanding the behavior of least squares fitting. Edit: If you want to extend this analysis to estimating MSE out of sample, then you would consider the following: $\hat{MSE}=\hat{bias}^2+\hat{var}$ $\hat{bias}^2$ is monotonically decreasing with p, and $\hat{var}$ is monotonically increasing with p. There are some relationships between p, n, and $\hat{MSE}$, for that I recommend Wessel van Wieringen's lecture notes on ridge regression as well as Elements of Statistical Learning, as mentioned in another answer to your original question. Hopefully that answers (2). Edit: I thought about this some more and is are two additional points I'd like to make. The first is the specific conditions under which an additional predictor will reduce in-sample MSE. Those conditions are: 1) The additional predictor does not lie entirely within the column space of $X$; that is, it cannot be obtained via any linear combination of the existing predictors, and 2) The component of the new predictor lying outside the column space of $X$ is not orthogonal to $Y$. The second point is that we can do a simple thought experiment showing that the addition of new predictors does, in general, tend to decrease in-sample MSE. Imagine we have solved our linear regression and obtained $β$, that is, a p-by-1 vector of model coefficients. Now imagine that we add an additional predictor. Unless BOTH of the two aforementioned conditions are satisfied, the (p+1) value of $β$ will be zero, and the model is exactly the same as it was prior to the addition of the new predictor (same MSE). In general, though, both of those conditions will be satisfied, and therefore the (p+1) value of $β$ will be something other than zero. 
Since both the zero-appended $β$ and nonzero-appended $β$ lie within the solution space of the least squares regression with p+1 predictors, we conclude that the p+1 model must have lower MSE than the p model if the new coefficient is anything other than zero.
Is MSE decreasing with increasing number of explanatory variables? I am assuming that you are talking about an ordinary least squares regression scenario and are referring to in-sample MSE, and that $Y$ is an n-by-1 vector and $X$ is an n-by-p matrix of orthogonal pr
34,683
Is MSE decreasing with increasing number of explanatory variables?
It is unfortunate that Empirical Risk Minimization is a topic where the Internet is full of incorrect information including https://en.wikipedia.org/wiki/Mean_squared_error, where Wikipedia changes its mind whether it is an estimate or a population parameter within the same article. First of all MSE is a population metric, therefore it is incorrect to talk about in-sample or out-of-sample MSE. Let's assume we have $N$ $(y,X)$ pairs for training where $y$ is a scalar and $X$ is a vector containing predictors. Let's also assume that $\hat{y}_N(X)$ is a predictor found using a fitting procedure using those $N$ samples; then the $MSE$ of prediction of the fitted model is: $MSE(\hat{y}) = E[(\hat{y}(X)-y)^2] = \int (\hat{y}(X)-y)^2 p(y,X) dydX$ On the other hand we can have an estimate of MSE. There are two flavors of this: the in-sample and the out-of-sample estimates. Please take note of the hat I am putting over $MSE$ below. $\hat{MSE_{in}} = \frac{1}{N}\sum_i^N (\hat{y}(X_i)-y_i)^2$ Let's assume we have another $M$ test samples we haven't used during the fitting procedure: $\hat{MSE_{out}} = \frac{1}{M}\sum_i^M (\hat{y}(X_i)-y_i)^2$ Now what can we say about the expected values of these estimators? It is easy to see that $E[\hat{MSE_{out}}] = MSE$: simply push the expectation inside the summation and use the fact that $\hat{y}$ is independent of the test samples. Also we use the $IID$ assumption over the samples. The same cannot be concluded for the in-sample estimate because $\hat{y}$ is not independent of the training samples (obviously, as we used the training samples to find $\hat{y}$), hence: $E[\hat{MSE_{in}}] \ne MSE$ In fact it is downward biased (https://web.stanford.edu/~hastie/Papers/ESLII.pdf, section 7.4): $E[\hat{MSE_{out}}] = E[\hat{MSE_{in}}] + \frac{2}{N}\sum_1^N Cov(y_i,\hat{y_i})$ This is the source of the nasty overfitting problem. Having cleared all that up, let's now look at how these estimates change according to the number of predictors and the number of samples used in the fitting process. The difference between the red and blue lines illustrates the bias between the in-sample and out-of-sample estimates of MSE. We also added a new concept: "MSE of proposed model". This is not the MSE of the fit but the asymptotic MSE as if we had infinite training samples. As the number of training samples increases, both the in-sample estimate and the out-of-sample estimate converge to the flat line, given that the training procedure is consistent!!! (http://bengio.abracadoudou.com/lectures/theory.pdf, slide 10) Now let's examine the effect of adding predictors. For nested models adding predictors always decreases (at least doesn't increase) $E[\hat{MSE_{in}}]$ given that the fitting procedure reached the global minimum. For nonlinear models optimization usually gets stuck in a local minimum, and this may appear to have increased the $\hat{MSE_{in}}$, but this is not an effect of the added predictor, but a side-effect of the fitting procedure. For non-nested cases it can go either way. I believe the original question was about a nested linear model. What happens to $\hat{MSE_{out}}$ is more interesting and deserves a demonstration. It is also more important as it provides an unbiased estimate of $MSE$ and is also a key objective in Cross Validation procedures. Suppose that we have data generated by a polynomial of degree 4 with additive Gaussian noise. Below is a bunch of samples to illustrate what it looks like. Now I will plot $\hat{MSE_{out}}$ for fitted polynomials from degree 2 to degree 6. 
The vertical axis is the logarithm of the error and the horizontal axis is again the number of samples used in the fitting procedure. I always used 1000 test samples to estimate $\hat{MSE_{out}}$. The correct model is degree 4 and is plotted in red; magenta is degree 2, green degree 3, yellow degree 5, and blue degree 6. As you can see, when the number of training samples is less than 10, the best $MSE$ is achieved with a polynomial of degree 2; after that polynomial degree 3 takes over, and when we reach 15, degree 4 dominates and from that point on it becomes unbeatable. This is a known phenomenon where the true model degree may not give the best predictive ability on small samples. When will a less true model predict better than a truer model? Another interesting observation is that even though degree 2 and degree 3 do a better job at small samples, as the number of training samples increases they are dominated not only by degree 4 (the true model), but also by degrees 5 and 6. This is natural as degree 5 and degree 6 subsume degree 4.
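For readers who want to reproduce the flavour of that experiment, here is a rough R sketch along the lines described (my own code; the particular degree-4 polynomial and the noise level are invented, so the exact crossover points will differ from the figure):

set.seed(1)
truth <- function(x) 1 + 2*x - 3*x^2 + 0.5*x^3 + 0.2*x^4   # assumed degree-4 polynomial
x_test <- runif(1000, -2, 2)
y_test <- truth(x_test) + rnorm(1000)

mse_out <- function(n_train, degree) {
  x <- runif(n_train, -2, 2)
  y <- truth(x) + rnorm(n_train)
  fit <- lm(y ~ poly(x, degree, raw = TRUE))
  mean((y_test - predict(fit, newdata = data.frame(x = x_test)))^2)   # out-of-sample estimate
}

sizes <- c(8, 10, 15, 20, 40, 80)
res <- sapply(2:6, function(d)
  sapply(sizes, function(m) mean(replicate(200, mse_out(m, d)))))
matplot(sizes, log(res), type = "l",
        xlab = "number of training samples", ylab = "log out-of-sample MSE")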
Is MSE decreasing with increasing number of explanatory variables?
It is unfortunate that Empirical Risk Minimization is a topic where the Internet is full of incorrect information including https://en.wikipedia.org/wiki/Mean_squared_error, where Wikipedia changes it
Is MSE decreasing with increasing number of explanatory variables? It is unfortunate that Empirical Risk Minimization is a topic where the Internet is full of incorrect information including https://en.wikipedia.org/wiki/Mean_squared_error, where Wikipedia changes its mind whether it is an estimate or a population parameter within the same article. First of all MSE is a population metric, therefore it is incorrect to talk about in-sample or out-of-sample MSE. Let's assume we have $N$ $(y,X)$ pairs for training where $y$ is a sacalar and $X$ is a vector containing predictors. Let's also assume that $\hat{y}_N(X)$ is a predictor found using a fitting procedure using those $N$ samples, then $MSE$ of prediction of fitted model is: $MSE(\hat{Y}) = E[(\hat{y}(X)-y)^2] = \int (\hat{y}(X)-y)^2 p(y,X) dydX$ On the other hand we can have an estimate of MSE. There are two flavors of this: the in sample and out-of-sample estimate. Please take note of the hat I am putting over $MSE$ below. $\hat{MSE_{in}} = \frac{1}{N}\sum_i^N ((\hat{y}(X_i)-y_i)^2$ Let's assume we have another $M$ test samples we haven't used during the fitting procedure: $\hat{MSE_{out}} = \frac{1}{M}\sum_i^M ((\hat{y}(X_i)-y_i)^2$ Now what can we say about the expected values of these estimators? It is easy to see that $E[\hat{MSE_{out}}] = MSE$ simply push the expectation inside the summation and use that fact that $\hat{y}$ is independent of the test samples. Also we use the $IID$ assumption over the samples. Same cannot be concluded for in sample estimate because $\hat{y}$ is not independent of the training samples (obviously as we used the training samples to find $\hat{y}$), hence: $E[\hat{MSE_{in}}] \ne MSE$ In fact it is downward biased (https://web.stanford.edu/~hastie/Papers/ESLII.pdf, section 7.4): $E[\hat{MSE_{out}}] = E[\hat{MSE_{in}}] + \frac{2}{N}\sum_1^N Cov(y_i,\hat{y_i})$ This is the source of the nasty overfitting problem. Having all that cleared now let's look at how these estimates change according to number of predictors and number of samples used in the fitting process The difference between the red and blue lines illustrates the bias between the in-sample and out-of-sample estimates of MSE. We also added a new concept: "MSE of proposed model". This is not the MSE of the fit but the asymptotic MSE as if we had infinite training samples. As the number of training samples increase both in-sample estimate and out-of-sample estimate converge to the flat line given that the training procedure is consistent!!! (http://bengio.abracadoudou.com/lectures/theory.pdf, slide 10) Now let's examine the effect of adding predictors. For nested models adding predictors always decreases (at least doesn't increase) $E[\hat{MSE_{in}}]$ given that the fitting procedure reached the global minimum. For nonlinear models optimization usually gets stuck in local minimum, and this may appear to have increased the $\hat{MSE_{in}}$, but this is not an effect of the added predictor, but a side-effect of the fitting procedure. For non-nested cases it can go either way. I believe the original question was towards a nested linear model. What happens to $\hat{MSE_{out}}$ is more interesting and deserves a demonstration. It is also more important as it provides an unbiased estimate of $MSE$ and is also a key objective in Cross Validation procedures. Suppose that we have data generated by a polynomial of degree 4 with additive Gaussian noise. Below is a bunch of samples to illustrate how it looks like. 
Now I will plot $\hat{MSE_{out}}$ for fitted polynomials from degree 2 to degree 6. vertical axis is logarithm of error and horizontal axis is again number of samples used in fitting procedure. I always used 1000 test samples to estimate $\hat{MSE_{out}}$. The correct model is degree 4 and plotted in red color. magenta is degree 2, green degree 3, yellow degree 5, and blue degree 6. As you can see when the number of training samples is less than 10, the best $MSE$ is achived with a polynomial degree 2, after that polynomial degree 3 takes over, and when we reach 15, degree 4 dominates and from that point on it becomes unbeatable. This is a known phenomenon where true model degree may not give the best prediction ability on small samples. When will a less true model predict better than a truer model? Another interesting observation is that even though degree 2 and degree 3 do a better job at small samples, as number of training samples increase not only they are dominated by degree 4 (the true model), but also degree 5 and 6. This is natural as degree 5 and degree 6 subsume degree 4.
Is MSE decreasing with increasing number of explanatory variables? It is unfortunate that Empirical Risk Minimization is a topic where the Internet is full of incorrect information including https://en.wikipedia.org/wiki/Mean_squared_error, where Wikipedia changes it
34,684
Is MSE decreasing with increasing number of explanatory variables?
The short answers are: Yes. A more precise answer should be "non-increasing", as mentioned in the comments. For example, if we include pure random noise as an independent variable, it will not make the MSE decrease, but will leave the MSE the same. See the links mentioned below. Assuming we only have one data set and want to build a model with as low an MSE as possible, we will have an over-fitting problem. Basically, the model we build is too specific to this given data set and the model is not able to generalize. People also call this a "falsely high R square" problem; see this post for reasons why people use adjusted R square. Why is adjusted R-squared less than R-squared if adjusted R-squared predicts the model better? The intuition behind adjusted R square or other regularization algorithms is to consider both performance (such as MSE) and the number of parameters used in the model. As a result, although increasing the number of independent variables will not hurt MSE, if the improvements are "marginal", the recommendation would still be not to add them.
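A tiny R illustration of that trade-off (my own sketch): adding a pure-noise column leaves the in-sample MSE essentially unchanged (it cannot rise), while the adjusted R-squared typically drops, signalling that the extra parameter is not worth it.

set.seed(7)
n <- 200
x <- rnorm(n)
y <- 1 + 2 * x + rnorm(n)
noise <- rnorm(n)                      # predictor unrelated to y

fit1 <- lm(y ~ x)
fit2 <- lm(y ~ x + noise)

c(mse1 = mean(resid(fit1)^2), mse2 = mean(resid(fit2)^2))                 # mse2 <= mse1, barely
c(adj1 = summary(fit1)$adj.r.squared, adj2 = summary(fit2)$adj.r.squared) # adj2 usually lower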
Is MSE decreasing with increasing number of explanatory variables?
The short answers are: Yes. A more precise answer should be "non-increasing", as mentioned in comment. For example, if we include a complete random noise as independent variable, it will not make MSE
Is MSE decreasing with increasing number of explanatory variables? The short answers are: Yes. A more precise answer should be "non-increasing", as mentioned in comment. For example, if we include a complete random noise as independent variable, it will not make MSE decrease, but make MSE the same. See the links mentioned below. Assuming we only have one data set, and want to build a model to have as low MSE as possible. Then, we will have a over-fitting problem. Basically we the model we build are too specific on this given data set and the model is not able to generalize. People also call this as a "false high R square problem", see this post for reasons why people use adjusted R square. Why is adjusted R-squared less than R-squared if adjusted R-squared predicts the model better? The intuition behind adjusted R square or other regularization algorithms are trying to consider both performance (such as MSE) and number of parameters used int he model. As a result, although increasing number of independent variables will not hurt MSE, but if the improvements are "marginal", the recommendations would still be not adding it.
Is MSE decreasing with increasing number of explanatory variables? The short answers are: Yes. A more precise answer should be "non-increasing", as mentioned in comment. For example, if we include a complete random noise as independent variable, it will not make MSE
34,685
Is MSE decreasing with increasing number of explanatory variables?
Ironically, the answer begins far away in a nether world of philosophy. Error is rarely intrinsically an "error" at all, except in quantum physics. It is the result of a virtually infinite number of factors, unobserved, contributing to an almost chaotic effect. If we had a correctly specified, infinite dimensional model, there would be no error in deterministic systems, like weather patterns, stock prices, planetary motion, and so on. We would have effects for the proverbial butterfly that flaps its wings in China, causing a tsunami in Guam. And if we reliably estimate a high dimensional, though finite, model, the "errors" are still smaller than would be seen in a lower dimensional model. So the answer to 1 is yes: in general a well-estimated, high-dimensional model confers better predictiveness, and thus a lower MSE, than a model with fewer predictors. The major caveat is reliability. Pitifully often, people use internal validation to check their model. Mean squared error is a quantity which we estimate using split-sample validation (good), cross-validation (good), bootstrapping (good), or internal validation (bad). In internal validation we use residuals which are, as a consequence of the model fitting procedure, perfectly orthogonal to the design matrix, to estimate error. This is problematic because any external, independent dataset will not have that added benefit, in that the linear combinations which comprise predicted values optimally reduce the error in that dataset. We would have to refit the model in every new instance of data, making the whole purpose of prediction a moot point. Coincidentally, if you overfit a model and then use internal validation: it is true that including more features will lead to a lower estimate of MSE, but in fact, the MSE of the model (as determined by validation in new, independent data) will be higher. This is because overfitting is out-of-sample variance, and the MSE is the sum of variance and squared bias. The answer to 2 is myriad. But regarding prediction and especially the point above about overfitting, and better understanding the bias/variance tradeoff, reliability, and prediction: an accessible (free) text is Elements of Statistical Learning.
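To make the internal-versus-external validation point concrete, here is a small R sketch (mine, not the answer's): a model stuffed with junk predictors looks excellent when judged on its own residuals, but much worse on independent data drawn from the same process.

set.seed(99)
n <- 60; p <- 40
X  <- matrix(rnorm(n * p), n, p)                 # junk predictors
y  <- rnorm(n)                                   # outcome unrelated to X
Xn <- matrix(rnorm(n * p), n, p)                 # fresh, independent data
yn <- rnorm(n)

fit <- lm(y ~ X)
mean(resid(fit)^2)                               # "internal" estimate: misleadingly small
mean((yn - cbind(1, Xn) %*% coef(fit))^2)        # error on new data: much larger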
Is MSE decreasing with increasing number of explanatory variables?
Ironically, the answer begins far away in a nether world of philosophy. Error is rarely intrinsically an "error" at all, except in quantum physics. It is the result of a virtually infinite number of
Is MSE decreasing with increasing number of explanatory variables? Ironically, the answer begins far away in a nether world of philosophy. Error is rarely intrinsically an "error" at all, except in quantum physics. It is the result of a virtually infinite number of factors, unobserved, contributing to an almost chaotic effect. If we had a correctly specified, infinite dimensional model, there would be no error in deterministic systems, like weather patterns, stock prices, planetary motion, and so on. We would have effects for the proverbial butterfly that flaps its wings in China, causing the Tsunami in Guam. And if we reliably estimate a high dimensional, though finite, model, the "errors" are still smaller than would be seen in a lower dimensional model. So the answer to 1 is yes, in general a well-estimated, high dimensional models confer better predictiveness, and thus a lower MSE, than a model with fewer predictors. The major caveat is reliability. Pitifully often, people use internal validation to check their model. Mean squared error is a quantity which we estimate using split-sample validation (good), cross-validation (good), bootstrapping (good), or internal validation (bad). In internal validation we use residuals which are, as a consequence of the model fitting procedure, perfectly orthogonal to the design matrix, to estimate error. This is problematic because any external, independent dataset will not have that added benefit, in that the linear combinations which comprise predicted values optimally reduce the error in that dataset. We would have to refit the model in every new instance of data, making the whole purpose of prediction a moot point. Coincidentally, if you overfit a model then use internal validation: it is true that including more features will lead to a lower estimate of MSE, but in fact, the MSE of the model (as determined by validation in new, independent data) will be higher. This is because overfitting is out-of-sample variance, and the MSE is the sum of variance and squared bias. The answer to 2 is myriad. But regarding prediction and especially the point above about overfitting, and better understanding bias/variance tradeoff, reliability, and prediction: an accessible (free) text is Elements of Statistical Learning.
Is MSE decreasing with increasing number of explanatory variables? Ironically, the answer begins far away in a nether world of philosophy. Error is rarely intrinsically an "error" at all, except in quantum physics. It is the result of a virtually infinite number of
34,686
Can a standard deviation of raw scores be reported as a standard deviation of percentages?
The standard deviation is just a statistical property that you can measure for a set of data points. The standard deviation does not itself make any assumptions that your data is normally distributed or has/has not passed through any transformations, linear or otherwise. Therefore, it's perfectly acceptable to use the standard deviation on any data, including the percentage scores. Note that, in your particular case, the transformation you are applying is a linear transform, of the form: $$ y = Ax + b $$ i.e. an affine transform. So you can calculate the standard deviation on the original, untransformed data and then multiply by A to get the standard deviation after the transform. There seems to be no particular advantage to doing this rather than simply calculating the standard deviation on the already transformed data, but it might be reassuring. We can see that an affine transformation will transform the standard deviation linearly by $A$, as follows: Given we have input data $\{X_1, X_2, ..., X_n\}$, the original standard deviation, $\sigma$, will be given by: $$ \sigma_X^2 = \frac{1}{n}\sum_{i=1}^n \left(X_i - \frac{1}{n}\sum_{j=1}^n X_j\right)^2 $$ Let's apply the transform $Y = AX + b$. Then we have $$ \sigma_Y^2 = \frac{1}{n}\sum_{i=1}^n \left( AX_i + b - \frac{1}{n} \sum_{j=1}^n \left( AX_j + b \right) \right)^2 $$ $$ = \frac{1}{n}\sum_{i=1}^n \left( AX_i + b - n\frac{1}{n}b - \frac{1}{n} \sum_{j=1}^n \left( AX_j \right) \right)^2 $$ $$ = \frac{1}{n}\sum_{i=1}^n \left( AX_i - \frac{1}{n} \sum_{j=1}^n \left( AX_j \right) \right)^2 $$ $$ = A^2 \left( \frac{1}{n}\sum_{i=1}^n \left( X_i - \frac{1}{n} \sum_{j=1}^n \left( X_j \right) \right)^2 \right) $$ $$ = A^2 \sigma_X^2 $$ Therefore $$ \sigma_Y = A \sigma_X. $$
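A quick R check of that result (the raw scores are made-up example numbers): converting to percentages multiplies the standard deviation by the same factor $A$.

raw <- c(12, 15, 9, 18, 14)     # hypothetical raw scores on a 20-point test
A   <- 100 / 20                 # conversion factor to percentages
pct <- A * raw

sd(pct)                         # equals...
A * sd(raw)                     # ...A times the raw-score standard deviation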
Can a standard deviation of raw scores be reported as a standard deviation of percentages?
The standard deviation is just a statistical property that you can measure for a set of data points. The standard deviation does not itself make any assumptions that your data is normally distributed
Can a standard deviation of raw scores be reported as a standard deviation of percentages? The standard deviation is just a statistical property that you can measure for a set of data points. The standard deviation does not itself make any assumptions that your data is normally distributed or has/has not passed through any transformations, linear or otherwise. Therefore, it's perfectly acceptable to use the standard deviation on any data, including the percentage scores. Note that, in your particular case, the transformation you are applying is a linear transform, of the form: $$ y = Ax + b $$ i.e. an affine transform. So you can calculate the standard deviation on the original, untransformed data and then multiply by A to get the standard deviation after the transform. There seems to be no particular advantage to doing this rather than simply calculating the standard deviation on the already transformed data, but it might be reassuring. We can see that an affine transformation will transform the standard deviation linearly by $A$, as follows: Given we have input data $\{X_1, X_2, ..., X_n\}$, the original standard deviation, $\sigma$, will be given by: $$ \sigma_X^2 = \frac{1}{n}\sum_{i=1}^n \left(X_i - \frac{1}{n}\sum_{j=1}^n X_j\right)^2 $$ Let's apply the transform $Y = AX + b$. Then we have $$ \sigma_Y^2 = \frac{1}{n}\sum_{i=1}^n \left( AX_i + b - \frac{1}{n} \sum_{j=1}^n \left( AX_j + b \right) \right)^2 $$ $$ = \frac{1}{n}\sum_{i=1}^n \left( AX_i + b - n\frac{1}{n}b - \frac{1}{n} \sum_{j=1}^n \left( AX_j \right) \right)^2 $$ $$ = \frac{1}{n}\sum_{i=1}^n \left( AX_i - \frac{1}{n} \sum_{j=1}^n \left( AX_j \right) \right)^2 $$ $$ = A^2 \left( \frac{1}{n}\sum_{i=1}^n \left( X_i - \frac{1}{n} \sum_{j=1}^n \left( X_j \right) \right)^2 \right) $$ $$ = A^2 \sigma_X^2 $$ Therefore $$ \sigma_Y = A \sigma_X. $$
Can a standard deviation of raw scores be reported as a standard deviation of percentages? The standard deviation is just a statistical property that you can measure for a set of data points. The standard deviation does not itself make any assumptions that your data is normally distributed
34,687
Signs of related covariances
No, this is not implied. The sign of a covariance is essentially only preserved in a consistent way by linear transformations: for all other functions, including $f(x) = x^{-1}$ you can exploit the curvature of the function to make the sign whatever you want. Here's a quick example I got by playing around with the numbers: suppose you sample $(1,1)$, $(2.5, 0.1)$ and $(3,2)$ uniformly to generate $(X,Y)$ pairs. This gives positive covariance, and still does if we replace the $Y$ values by $1, 10, 0.5$. There may be numerically simpler examples available, but at least three points are necessary.
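You can verify that example with a couple of lines of R (cov() gives the sample covariance, but only the signs matter here):

x <- c(1, 2.5, 3)
y <- c(1, 0.1, 2)
cov(x, y)        # positive
cov(x, 1 / y)    # still positive: 1/y is 1, 10, 0.5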
Signs of related covariances
No, this is not implied. The sign of a covariance is essentially only preserved in a consistent way by linear transformations: for all other functions, including $f(x) = x^{-1}$ you can exploit the c
Signs of related covariances No, this is not implied. The sign of a covariance is essentially only preserved in a consistent way by linear transformations: for all other functions, including $f(x) = x^{-1}$ you can exploit the curvature of the function to make the sign whatever you want. Here's a quick example I got by playing around with the numbers: suppose you sample $(1,1)$, $(2.5, 0.1)$ and $(3,2)$ uniformly to generate $(X,Y)$ pairs. This gives positive covariance, and still does if we replace the $Y$ values by $1, 10, 0.5$. There may be numerically simpler examples available, but at least three points are necessary.
Signs of related covariances No, this is not implied. The sign of a covariance is essentially only preserved in a consistent way by linear transformations: for all other functions, including $f(x) = x^{-1}$ you can exploit the c
34,688
How to stack machine learning models in R
What you're doing here is what I refer to as "Holdout Stacking" (sometimes also called Blending but that term is also used for regular Stacking), where you use a holdout set to generate the training data for the metalearning algorithm (i.e. predDF). I use the term Holdout Stacking to differentiate from regular Stacking (or "Super Learning") where you generate cross-validated predicted values from the base learners to generate the training data for the metalearner algorithm (in your case, a Random Forest) rather than a holdout set (your testing frame). The problem here is not how you're doing the stacking, but how you're evaluating the results. Once you've used the testing frame to generate the predDF frame, you have to throw that data away and not use it for model evaluation. In your example, you are also using the testing frame to evaluate the performance of the base models and the ensemble learner. To fix this, just partition off another chunk of your data. You should have three datasets: training, validation and testing. Use the validation set to create predDF (also known as the "level one" dataset in stacking terminology). # Generate level-one dataset for training the ensemble metalearner predRF <- predict(modelFitRF, newdata = validation) predGBM <- predict(modelFitGBM, newdata = validation) prefLDA <- predict(modelFitLDA, newdata = validation) predDF <- data.frame(predRF, predGBM, prefLDA, diagnosis = validation$diagnosis, stringsAsFactors = F) # Train the ensemble modelStack <- train(diagnosis ~ ., data = predDF, method = "rf") Then evaluate your base learners and your ensemble on the testing set to get a better idea of how the ensemble compares to the individual learners. # Generate predictions on the test set testPredRF <- predict(modelFitRF, newdata = testing) testPredGBM <- predict(modelFitGBM, newdata = testing) testPredLDA <- predict(modelFitLDA, newdata = testing) # Using the base learner test set predictions, # create the level-one dataset to feed to the ensemble testPredLevelOne <- data.frame(testPredRF, testPredGBM, testPredLDA, diagnosis = testing$diagnosis, stringsAsFactors = F) combPred <- predict(modelStack, testPredLevelOne) # Evaluate ensemble test performance confusionMatrix(combPred, testing$diagnosis)$overall[1] # Evaluate base learner test performance confusionMatrix(testPredRF, testing$diagnosis)$overall[1] confusionMatrix(testPredGBM, testing$diagnosis)$overall[1] confusionMatrix(testPredLDA, testing$diagnosis)$overall[1] Lastly, as a suggestion, I'd recommend trying a GLM for the metalearning algorithm because they seem to perform better than tree-based models in my experience, though that is not always the case. If you're specifically looking for multiclass support in Stacking, it will be available soon in the h2o R package. If you don't need multiclass, then you can check out either the SuperLearner or h2o packages to do stacking more easily than writing it all out by hand. See the SuperLearner() or the h2o.stackedEnsemble() functions to do Stacking with one line of code.
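Acting on the closing suggestion, swapping the metalearner is a one-line change in this caret setup (a sketch that reuses the predDF, testPredLevelOne and testing objects defined above, and assumes the outcome is a two-class factor so that method = "glm" fits a binomial model):

# Same level-one data, but a GLM instead of a Random Forest as the metalearner
modelStackGLM <- train(diagnosis ~ ., data = predDF, method = "glm")
combPredGLM   <- predict(modelStackGLM, testPredLevelOne)
confusionMatrix(combPredGLM, testing$diagnosis)$overall[1]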
How to stack machine learning models in R
What you're doing here is what I refer to as "Holdout Stacking" (sometimes also called Blending but that term is also used for regular Stacking), where you use a holdout set to generate the training d
How to stack machine learning models in R What you're doing here is what I refer to as "Holdout Stacking" (sometimes also called Blending but that term is also used for regular Stacking), where you use a holdout set to generate the training data for the metalearning algorithm (i.e. predDF). I use the term Holdout Stacking to differentiate from regular Stacking (or "Super Learning") where you generate cross-validated predicted values from the base learners to generate the training data for the metalearner algorithm (in your case, a Random Forest) rather than a holdout set (your testing frame). The problem here is not how you're doing the stacking, but how you're evaluating the results. Once you've used the testing frame to generate the predDF frame, you have to throw that data away and not use it for model evaluation. In your example, you are also using the testing frame to evaluate the performance of the base models and the ensemble learner. To fix this, just partition off another chunk of your data. You should have three datasets: training, validation and testing. Use the validation set to create predDF (also known as the "level one" dataset in stacking terminology). # Generate level-one dataset for training the ensemble metalearner predRF <- predict(modelFitRF, newdata = validation) predGBM <- predict(modelFitGBM, newdata = validation) prefLDA <- predict(modelFitLDA, newdata = validation) predDF <- data.frame(predRF, predGBM, prefLDA, diagnosis = validation$diagnosis, stringsAsFactors = F) # Train the ensemble modelStack <- train(diagnosis ~ ., data = predDF, method = "rf") Then evaluate your base learners and your ensemble on the testing set to get a better idea of how the ensemble compares to the individual learners. # Generate predictions on the test set testPredRF <- predict(modelFitRF, newdata = testing) testPredGBM <- predict(modelFitGBM, newdata = testing) testPredLDA <- predict(modelFitLDA, newdata = testing) # Using the base learner test set predictions, # create the level-one dataset to feed to the ensemble testPredLevelOne <- data.frame(testPredRF, testPredGBM, testPredLDA, diagnosis = testing$diagnosis, stringsAsFactors = F) combPred <- predict(modelStack, testPredLevelOne) # Evaluate ensemble test performance confusionMatrix(combPred, testing$diagnosis)$overall[1] # Evaluate base learner test performance confusionMatrix(testPredRF, testing$diagnosis)$overall[1] confusionMatrix(testPredGBM, testing$diagnosis)$overall[1] confusionMatrix(testPredLDA, testing$diagnosis)$overall[1] Lastly, as a suggestion, I'd recommend trying a GLM for the metalearning algorithm because they seem to perform better than tree-based models in my experience, though that is not always the case. If you're specifically looking for multiclass support in Stacking, it will be available soon in the h2o R package. If you don't need multiclass, then you can check out either the SuperLearner or h2o packages to do stacking more easily than writing it all out by hand. See the SuperLearner() or the h2o.stackedEnsemble() functions to do Stacking with one line of code.
How to stack machine learning models in R What you're doing here is what I refer to as "Holdout Stacking" (sometimes also called Blending but that term is also used for regular Stacking), where you use a holdout set to generate the training d
34,689
What is the number of parameters needed for a joint probability distribution?
It takes $3\times 2 \times 2 \times 3 = 36$ numbers to write down a probability distribution on all possible values of these variables. They are redundant, because they must sum to $1$. Therefore the number of (functionally independent) parameters is $35$. If you need more convincing (that was a rather hand-waving argument), read on. By definition, a sequence of such random variables is a measurable function $$\mathbf{X}=(X_1,X_2,X_3,X_4):\Omega\to\mathbb{R}^4$$ defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. By limiting the range of $X_1$ to a set of three elements ("states"), etc., you guarantee the range of $\mathbf{X}$ itself is limited to $3\times 2\times 2 \times 3=36$ possible values. Any probability distribution for $\mathbf{X}$ can be written as a set of $36$ probabilities, one for each one of those values. The axioms of probability impose $36+1$ constraints on those probabilities: they must be nonnegative ($36$ inequality constraints) and sum to unity (one equality constraint). Conversely, any set of $36$ numbers satisfying all $37$ constraints gives a possible probability measure on $\Omega$. It should be obvious how this works, but to be explicit, let's introduce some notation: Let the possible values of $X_i$ be $a_i^{(1)}, a_i^{(2)}, \ldots, a_i^{(k_i)}$ where $X_i$ has $k_i$ possible values. Let the nonnegative numbers, summing to $1$, associated with $\mathbf{a}=(a_1^{(i_1)}, a_2^{(i_2)}, a_3^{(i_3)}, a_4^{(i_4)})$ be written $p_{i_1i_2i_3i_4}$. For any vector of possible values $\mathbf{a}$ for $\mathbf{X}$, we know (because random variables are measurable) that $$\mathbf{X}^{-1}(\mathbf{a}) = \{\omega\in\Omega\mid \mathbf{X}(\omega)=\mathbf{a}\}$$ is a measurable set (in $\mathcal{F}$). Define $$\mathbb{P}\left(\mathbf{X}^{-1}(\mathbf{a})\right) = p_{i_1i_2i_3i_4}.$$ It is trivial to check that $\mathbb{P}$ is an $\mathcal{F}$-measurable probability measure on $\Omega$. The set of all such $p_{i_1i_2i_3i_4}$ ($36$ of them, with nonnegative values summing to unity) forms the unit simplex in $\mathbb{R}^{36}$. We have thereby established a natural one-to-one correspondence between the points of this simplex and the set of all possible probability distributions of all such $\mathbf{X}$ (regardless of what $\Omega$ or $\mathcal{F}$ might happen to be). The unit simplex in this case is a $36-1=35$-dimensional submanifold-with-corners: any continuous (or differentiable, or algebraic) coordinate system for this set requires $35$ numbers. This construction is closely related to a basic tool used by Efron, Tibshirani, and others for studying the Bootstrap as well as to the influence function used to study M-estimators. It is called the "sampling representation." To see the connection, suppose you have a batch of $36$ data points $y_1, y_2, \ldots, y_{36}$. A bootstrap sample consists of $36$ independent realizations from the random variable $\mathbf X$ that has a $p_1=1/36$ chance of equaling $y_1$, a $p_2=1/36$ chance of equaling $y_2$, and so on: it is the empirical distribution. To understand the properties of the Bootstrap and other resampling statistics, Efron et al consider modifying this to some other distribution where the $p_i$ are no longer necessarily equal to one another. 
For instance, by changing $p_k$ to $1/36 + \epsilon$ and changing all the other $p_j$ ($j\ne k$) by $-\epsilon/35$ you obtain (for sufficiently small $\epsilon$) a distribution that represents overweighting the data value $X_k$ (when $\epsilon$ is positive) or underweighting it (when $\epsilon$ is negative) or even deleting it altogether (when $\epsilon=-1/36$), which leads to the "Jackknife". As such, this representation of all the weighted resampling possibilities by means of a vector $\mathbf{p} = (p_1,p_2, \ldots, p_{36})$ allows us to visualize and reason about different resampling schemes as points on the unit simplex. The influence function of the value $X_k$ for any (differentiable) functional statistic $t$, for instance, is simply proportional to the partial derivative of $t(X)$ with respect to $p_k$. Reference Efron and Tibshirani (1993), An Introduction to The Bootstrap (Chapters 20 and 21).
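As a toy numerical companion to this argument (my own sketch, not part of the original answer), here is the counting in R: the joint distribution is a table of $36$ cells, a valid table is any set of nonnegative numbers summing to one, and fixing $35$ of them determines the last.

k <- c(3, 2, 2, 3)
prod(k)                      # 36 cells in the joint probability table
prod(k) - 1                  # 35 free parameters

set.seed(5)
p <- rexp(prod(k)); p <- p / sum(p)   # a random point on the unit simplex (a valid joint pmf)
all(p >= 0); sum(p)                   # nonnegative and sums to 1
1 - sum(p[-36])                       # the 36th cell is determined by the other 35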
What is the number of parameters needed for a joint probability distribution?
It takes $3\times 2 \times 2 \times 3 = 36$ numbers to write down a probability distribution on all possible values of these variables. They are redundant, because they must sum to $1$. Therefore th
What is the number of parameters needed for a joint probability distribution? It takes $3\times 2 \times 2 \times 3 = 36$ numbers to write down a probability distribution on all possible values of these variables. They are redundant, because they must sum to $1$. Therefore the number of (functionally independent) parameters is $35$. If you need more convincing (that was a rather hand-waving argument), read on. By definition, a sequence of such random variables is a measurable function $$\mathbf{X}=(X_1,X_2,X_3,X_4):\Omega\to\mathbb{R}^4$$ defined on a probability space $(\Omega, \mathcal{F}, \mathbb{P})$. By limiting the range of $X_1$ to a set of three elements ("states"), etc., you guarantee the range of $\mathbf{X}$ itself is limited to $3\times 2\times 2 \times 3=36$ possible values. Any probability distribution for $\mathbf{X}$ can be written as a set of $36$ probabilities, one for each one of those values. The axioms of probability impose $36+1$ constraints on those probabilities: they must be nonnegative ($36$ inequality constraints) and sum to unity (one equality constraint). Conversely, any set of $36$ numbers satisfying all $37$ constraints gives a possible probability measure on $\Omega$. It should be obvious how this works, but to be explicit, let's introduce some notation: Let the possible values of $X_i$ be $a_i^{(1)}, a_i^{(2)}, \ldots, a_i^{(k_i)}$ where $X_i$ has $k_i$ possible values. Let the nonnegative numbers, summing to $1$, associated with $\mathbf{a}=(a_1^{(i_1)}, a_2^{(i_2)}, a_3^{(i_3)}, a_4^{(i_4)})$ be written $p_{i_1i_2i_3i_4}$. For any vector of possible values $\mathbf{a}$ for $\mathbf{X}$, we know (because random variables are measureable) that $$\mathbf{X}^{-1}(\mathbf{a}) = \{\omega\in\Omega\mid \mathbf{X}(\omega)=\mathbf{a}\}$$ is a measurable set (in $\mathcal{F}$). Define $$\mathbb{P}\left(\mathbf{X}^{-1}(\mathbf{a})\right) = p_{i_1i_2i_3i_4}.$$ It is trivial to check that $\mathbb{P}$ is an $\mathcal{F}$-measurable probability measure on $\Omega$. The set of all such $p_{i_1i_2i_3i_4}$, with $36$ subscripts, nonnegative values, and summing to unity, form the unit simplex in $\mathbb{R}^{36}$. We have thereby a established a natural one-to-one correspondence between the points of this simplex and the set of all possible probability distributions of all such $\mathbf{X}$ (regardless of what $\Omega$ or $\mathcal{F}$ might happen to be). The unit simplex in this case is a $36-1=35$-dimensional submanifold-with-corners: any continuous (or differentiable, or algebraic) coordinate system for this set requires $35$ numbers. This construction is closely related to a basic tool used by Efron, Tibshirani, and others for studying the Bootstrap as well as to the influence function used to study M-estimators. It is called the "sampling representation." To see the connection, suppose you have a batch of $36$ data points $y_1, y_2, \ldots, y_{36}$. A bootstrap sample consists of $36$ independent realizations from the random variable $\mathbf X$ that has a $p_1=1/36$ chance of equaling $y_1$, a $p_2=1/36$ chance of equaling $y_2$, and so on: it is the empirical distribution. To understand the properties of the Bootstrap and other resampling statistics, Efron et al consider modifying this to some other distribution where the $p_i$ are no longer necessarily equal to one another. 
For instance, by changing $p_k$ to $1/36 + \epsilon$ and changing all the other $p_j$ ($j\ne k$) by $-\epsilon/35$ you obtain (for sufficiently small $\epsilon$) a distribution that represents overweighting the data value $X_k$ (when $\epsilon$ is positive) or underweighting it (when $\epsilon$ is negative) or even deleting it altogether (when $\epsilon=-1/36$), which leads to the "Jackknife". As such, this representation of all the weighted resampling possibilities by means of a vector $\mathbf{p} = (p_1,p_2, \ldots, p_{36})$ allows us to visualize and reason about different resampling schemes as points on the unit simplex. The influence function of the value $X_k$ for any (differentiable) functional statistic $t$, for instance, is simply proportional to the partial derivative of $t(X)$ with respect to $p_k$. Reference Efron and Tibshirani (1993), An Introduction to The Bootstrap (Chapters 20 and 21).
What is the number of parameters needed for a joint probability distribution? It takes $3\times 2 \times 2 \times 3 = 36$ numbers to write down a probability distribution on all possible values of these variables. They are redundant, because they must sum to $1$. Therefore th
34,690
What is the number of parameters needed for a joint probability distribution?
The number of parameters needed to represent a random variable is only defined with reference to a model, that is, a family of cumulative distribution functions equipped with a set of parameters that can be used to index them. For example, a normally distributed random variable with mean 3 and standard deviation 1 could be represented with a 0-parameter model (where the only legal distribution is $N(3, 1)$), a 2-parameter model (e.g., $N(μ, σ)$ where $μ$ and $σ$ are parameters), or a 4-parameter model (e.g., $N(μ_1, σ_1) + N(μ_2, σ_2)$).
What is the number of parameters needed for a joint probability distribution?
The number of parameters needed to represent a random variable is only defined with reference to a model, that is, a family of cumulative distribution functions equipped with a set of parameters that
What is the number of parameters needed for a joint probability distribution? The number of parameters needed to represent a random variable is only defined with reference to a model, that is, a family of cumulative distribution functions equipped with a set of parameters that can be used to index them. For example, a normally distributed random variable with mean 3 and standard deviation 1 could be represented with a 0-parameter model (where the only legal distribution is $N(3, 1)$), a 2-parameter model (e.g., $N(μ, σ)$ where $μ$ and $σ$ are parameters), or a 4-parameter model (e.g., $N(μ_1, σ_1) + N(μ_2, σ_2)$).
What is the number of parameters needed for a joint probability distribution? The number of parameters needed to represent a random variable is only defined with reference to a model, that is, a family of cumulative distribution functions equipped with a set of parameters that
34,691
What is the number of parameters needed for a joint probability distribution?
I would get concrete here. Suppose one has this table (credit):

| t | w | p(t,w) |
|------|------|--------|
| hot | sun | 0.4 |
| hot | rain | 0.1 |
| cold | sun | 0.2 |
| cold | rain | 0.3 |

To calculate all $p(t,w)$, do we need four params? Yes, or we can get away with three params and let the last param be 1 minus the sum of the other three. Now, what if $t$ and $w$ are independent? To generate the whole table, we only need $2-1=1$ param for $t$, let's say $p(t=hot)$, and $2-1=1$ param for $w$, say $p(w=sun)$.
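A small R sketch of the same counting (the independence-case probabilities 0.5 and 0.6 below are made-up illustrative values):

# Full joint table: 4 cells, 4 - 1 = 3 free parameters
p <- matrix(c(0.4, 0.1, 0.2, 0.3), nrow = 2, byrow = TRUE,
            dimnames = list(t = c("hot", "cold"), w = c("sun", "rain")))
sum(p)                                    # 1, so the last cell is redundant

# Under independence: one parameter per variable suffices
p_hot <- 0.5; p_sun <- 0.6
outer(c(hot = p_hot, cold = 1 - p_hot),
      c(sun = p_sun, rain = 1 - p_sun))   # reconstructs a full (independent) joint table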
What is the number of parameters needed for a joint probability distribution?
I would get concrete here. Suppose one has this table, credit | t | w | p(t,w) | |------|------|--------| | hot | sun | 0.4 | | hot | rain | 0.1 | | cold | sun | 0.2 | | cold | rai
What is the number of parameters needed for a joint probability distribution? I would get concrete here. Suppose one has this table, credit | t | w | p(t,w) | |------|------|--------| | hot | sun | 0.4 | | hot | rain | 0.1 | | cold | sun | 0.2 | | cold | rain | 0.3 | To calculate all $p(t,w)$, do we need four params? Yes, or we can get away with three params and let the last param be 1 minus the sum of the other three params. Now, what if $t$ and $w$ are independent? To generate the whole table, we only need $2-1=1$ param for $t$, let's say $p(t=hot)$ and $2-1=1$ param for $w$, say $p(w=sun)$.
What is the number of parameters needed for a joint probability distribution? I would get concrete here. Suppose one has this table, credit | t | w | p(t,w) | |------|------|--------| | hot | sun | 0.4 | | hot | rain | 0.1 | | cold | sun | 0.2 | | cold | rai
34,692
PDF of sum of truncated exponential distribution
Updated answer The solution is going to be an $n$-part piecewise pdf on (0,1). Given that the OP has noted he is interested in large $n$, expressing the exact pdf of the sample mean is likely to get messy. For large $n$ (as given), one should obtain an excellent neat simple approximation via the Central Limit Theorem. Structure Let $X \sim \text{TruncatedExponential}(\lambda)$ (truncated above at 1), with pdf: $$f(x)=\frac{\lambda e^{-\lambda x}}{1-e^{-\lambda }} \quad \text{ for } 0 <x<1$$ where: $$\mathbb{E}[X] = \frac{1}{\lambda }+\frac{1}{1-e^{\lambda }} \quad \quad \text{and} \quad \quad \text{Var}(X) = \frac{1}{\lambda ^2}-\frac{e^{\lambda }}{\left(e^{\lambda }-1\right)^2}$$ Then if the random variables ${X_1, X_2, ...}$ are iid, by the Central Limit Theorem: $$\bar{X}_n \;\overset{a} \sim\; N\left(\mathbb{E}[X] ,\frac{\text{Var}(X)}{n}\right)$$ All done. The following diagram compares: the EXACT distribution of the sample mean (blue curve) with the asymptotic Normal distribution (dashed red curve) when the sample size is just $n = 6$: Even with this tiny sample size, the simple Normal approximation already performs well in the $\lambda = 1$ case (LHS diagram). If $\lambda$ becomes larger, the distribution becomes more peaked and shifts to the left, and larger sample sizes will be needed ... but will still perform extremely well for large $n$. For comparison, the exact pdf when $n = 6$ is: Derivation of Exact PDF To illustrate the calculation of the exact pdf, consider first two independent Truncated Exponential variables, say $X$ and $Y$ which will have joint pdf $f(x,y)$: Then, the cdf of $S=X+Y$, i.e. $P(X+Y<s)$ is: where I am using the Prob function from the mathStatica package for Mathematica to automate the calculation. The pdf of $S=X+Y$is just the derivative of the cdf wrt $s$: Here is a plot of the exact pdf just derived in the $n= 2$ case (here for the sample sum) when $\lambda = 1$: One can derive the exact pdf of the sample sum (or sample mean) for larger $n$ in this same manner ... though for large $n$, the Central Limit Theorem will rapidly become your friend.
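Here is a short R check of that Normal approximation (my own sketch; it draws truncated-exponential variates by inverting the cdf):

lambda <- 1; n <- 6; reps <- 1e5
set.seed(3)
r01exp <- function(m) -log(1 - (1 - exp(-lambda)) * runif(m)) / lambda   # inverse-cdf sampler

xbar <- replicate(reps, mean(r01exp(n)))

mu <- 1/lambda + 1/(1 - exp(lambda))                  # E[X] from the answer
v  <- 1/lambda^2 - exp(lambda)/(exp(lambda) - 1)^2    # Var(X) from the answer
c(mean(xbar), mu)                                     # close
c(var(xbar), v / n)                                   # close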
PDF of sum of truncated exponential distribution
Updated answer The solution is going to be an $n$-part piecewise pdf on (0,1). Given that the OP has noted he is interested in large $n$, expressing the exact pdf of the sample mean is likely to get m
PDF of sum of truncated exponential distribution Updated answer The solution is going to be an $n$-part piecewise pdf on (0,1). Given that the OP has noted he is interested in large $n$, expressing the exact pdf of the sample mean is likely to get messy. For large $n$ (as given), one should obtain an excellent neat simple approximation via the Central Limit Theorem. Structure Let $X \sim \text{TruncatedExponential}(\lambda)$ (truncated above at 1), with pdf: $$f(x)=\frac{\lambda e^{-\lambda x}}{1-e^{-\lambda }} \quad \text{ for } 0 <x<1$$ where: $$\mathbb{E}[X] = \frac{1}{\lambda }+\frac{1}{1-e^{\lambda }} \quad \quad \text{and} \quad \quad \text{Var}(X) = \frac{1}{\lambda ^2}-\frac{e^{\lambda }}{\left(e^{\lambda }-1\right)^2}$$ Then if the random variables ${X_1, X_2, ...}$ are iid, by the Central Limit Theorem: $$\bar{X}_n \;\overset{a} \sim\; N\left(\mathbb{E}[X] ,\frac{\text{Var}(X)}{n}\right)$$ All done. The following diagram compares: the EXACT distribution of the sample mean (blue curve) with the asymptotic Normal distribution (dashed red curve) when the sample size is just $n = 6$: Even with this tiny sample size, the simple Normal approximation already performs well in the $\lambda = 1$ case (LHS diagram). If $\lambda$ becomes larger, the distribution becomes more peaked and shifts to the left, and larger sample sizes will be needed ... but will still perform extremely well for large $n$. For comparison, the exact pdf when $n = 6$ is: Derivation of Exact PDF To illustrate the calculation of the exact pdf, consider first two independent Truncated Exponential variables, say $X$ and $Y$ which will have joint pdf $f(x,y)$: Then, the cdf of $S=X+Y$, i.e. $P(X+Y<s)$ is: where I am using the Prob function from the mathStatica package for Mathematica to automate the calculation. The pdf of $S=X+Y$is just the derivative of the cdf wrt $s$: Here is a plot of the exact pdf just derived in the $n= 2$ case (here for the sample sum) when $\lambda = 1$: One can derive the exact pdf of the sample sum (or sample mean) for larger $n$ in this same manner ... though for large $n$, the Central Limit Theorem will rapidly become your friend.
PDF of sum of truncated exponential distribution Updated answer The solution is going to be an $n$-part piecewise pdf on (0,1). Given that the OP has noted he is interested in large $n$, expressing the exact pdf of the sample mean is likely to get m
34,693
PDF of sum of truncated exponential distribution
[There was indeed a mistake in the earlier derivation!] If $f_n$ denotes the density of $s_n=x_1+\ldots+x_n$, it satisfies the recursion \begin{align*} f_1(s) &= \dfrac{\lambda e^{-\lambda s}}{1-e^{-\lambda}}\,\mathbb{I}_{(0,1)}(s)\\ f_n(s) &= \int_0^{1} f_{n-1}(s-y) \dfrac{\lambda e^{-\lambda y} }{ 1-e^{-\lambda}}\text{d}y\,\mathbb{I}_{(0,n)}(s)\\ \end{align*} The computation for $n=2$ leads to \begin{align*} f_2(s)&=\dfrac{\lambda^2}{(1-e^{-\lambda})^2}\int_0^{1} e^{-\lambda y}e^{-\lambda (s-y)}\mathbb{I}_{(0,1)}(s-y)\text{d}y\,\mathbb{I}_{(0,2)}(s)\\ &=\dfrac{\lambda^2}{(1-e^{-\lambda})^2}\int_{0\vee (s-1)}^{1\wedge s} e^{-\lambda y}e^{-\lambda (s-y)}\text{d}y\,\mathbb{I}_{(0,2)}(s)\\ &=\dfrac{\lambda^2e^{-\lambda s}}{(1-e^{-\lambda})^2} \left[{1\wedge s}-{0\vee (s-1)}\right]\\ &=\dfrac{\lambda^2e^{-\lambda s}}{(1-e^{-\lambda})^2}\left[s\,\mathbb{I}_{(0,1)}(s)+(2-s)\,\mathbb{I}_{(1,2)}(s)\right]\\ \end{align*} which does not show much promise for the general case.
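For what it's worth, here is a small numerical check of the $n=2$ density in R (my own sketch, not part of the original answer): one step of the convolution recursion is evaluated with integrate and compared against the closed form just derived.
lambda <- 1
f1 <- function(s) ifelse(s > 0 & s < 1, lambda*exp(-lambda*s)/(1 - exp(-lambda)), 0)
# one step of the convolution recursion, evaluated numerically over the valid y-range
f2_num <- function(s) sapply(s, function(si)
  integrate(function(y) f1(si - y)*f1(y),
            lower = max(0, si - 1), upper = min(1, si))$value)
# the closed form derived above
f2_exact <- function(s) lambda^2*exp(-lambda*s)/(1 - exp(-lambda))^2 *
  (s*(s > 0 & s < 1) + (2 - s)*(s >= 1 & s < 2))
s <- seq(0.05, 1.95, by = 0.05)
max(abs(f2_num(s) - f2_exact(s)))   # should be essentially zero (up to quadrature error)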
PDF of sum of truncated exponential distribution
[There was indeed a mistake in the earlier derivation!] If $f_n$ denotes the density of $s_n=x_1+\ldots+x_n$, it satisfies the recursion \begin{align*} f_1(s) &= \lambda e^{-\lambda s} \big/ 1-e^{-\la
PDF of sum of truncated exponential distribution [There was indeed a mistake in the earlier derivation!] If $f_n$ denotes the density of $s_n=x_1+\ldots+x_n$, it satisfies the recursion \begin{align*} f_1(s) &= \lambda e^{-\lambda s} \big/ 1-e^{-\lambda}\mathbb{I}_{(0,1)}(s)\\ f_n(s) &= \int_0^{1} f_{n-1}(s-y) \dfrac{\lambda e^{-\lambda y} }{ 1-e^{-\lambda}}\text{d}y\mathbb{I}_{(0,n)}(s)\\ \end{align*} The computation for $n=2$ leads to \begin{align*} f_2(s)&=\dfrac{\lambda^2}{(1-e^{-\lambda})^2}\int_0^{1} e^{-\lambda y}e^{-\lambda (s-y)}\mathbb{I}_{(0,1)}(s-y)\text{d}y\mathbb{I}_{(0,2)}(s)\\ &=\dfrac{\lambda^2}{(1-e^{-\lambda})^2}\int_{0\vee (s-1)}^{1\wedge s} e^{-\lambda y}e^{-\lambda (s-y)}\text{d}y\mathbb{I}_{(0,2)}(s)\\ &=\dfrac{\lambda^2e^{-\lambda s}}{(1-e^{-\lambda})^2} \left[{1\wedge s}-{0\vee (s-1)}\right]\\ &=\dfrac{\lambda^2e^{-\lambda s}}{(1-e^{-\lambda})^2}\left[s\mathbb{I}_{(0,1)}(s)+(2-s)\mathbb{I}_{(1,2)}(s)\right]\\ \end{align*}which does not show much promise for the general case.
PDF of sum of truncated exponential distribution [There was indeed a mistake in the earlier derivation!] If $f_n$ denotes the density of $s_n=x_1+\ldots+x_n$, it satisfies the recursion \begin{align*} f_1(s) &= \lambda e^{-\lambda s} \big/ 1-e^{-\la
34,694
PDF of sum of truncated exponential distribution
The pdf of a $(0,1)$-truncated exponential distribution with rate parameter $\lambda$ can be written as \begin{align} f(x)&=I_{(0,1)}(x)\frac{\lambda e^{-\lambda x}}{1-e^{-\lambda}} \\&=(1-p)I_{(0,\infty)}(x)\lambda e^{-\lambda x}+pI_{(1,\infty)}(x)\lambda e^{-\lambda(x-1)}, \end{align} where $1-p=1/(1-e^{-\lambda})$. Although one weight is negative, it follows that we can treat this as a mixture of two exponential distributions and proceed as if both weights were positive. The sum of $n$ such truncated exponentials can then be seen as a mixture having a total of $n+1$ components (although again, some of the associated weights are actually negative). The $i$th component is the sum of $n$ exponentials out of which $i$ are shifted one unit to the right. The $i$th component is thus a gamma distribution with shape parameter $n$, shifted $i$ units to the right. The overall pdf is $$ f_n(x)=\sum_{i=0}^n {n \choose i}p^i(1-p)^{n-i}I_{(i,\infty)}(x)\frac{\lambda^n}{(n-1)!}(x-i)^{n-1}e^{-\lambda (x-i)}. $$ For $n=3$ and $\lambda=1$, the pdf has the following shape. R code:
# density function of the sum of n (0,1)-truncated exponentials
dsum01exp <- function(x, n, lambda=1) {
  p <- 1 - 1/(1 - exp(-lambda))   # the (negative) mixture weight
  d <- 0
  for (i in 0:n) {
    d <- d + choose(n, i)*p^i*(1 - p)^(n - i)*dgamma(x - i, shape=n, rate=lambda)
  }
  d
}
# random sample from the (0,1)-truncated exponential (inverse-cdf method)
r01exp <- function(n, lambda=1) {
  -1/lambda*log(1 - (1 - exp(-lambda))*runif(n))
}
# histogram of simulated sums vs. theoretical pdf
n <- 3
x <- matrix(r01exp(n*1e+5), ncol=n)
hist(apply(x, 1, sum), breaks=100, prob=TRUE, xlab="sum(x)", main="")
curve(dsum01exp(x, n), 0, n, ylab=expression(f[n](x)), add=TRUE)
PDF of sum of truncated exponential distribution
The pdf of a $(0,1)$-trunctated exponential distribution with rate parameter $\lambda$ can be written as \begin{align} f(x)&=I_{(0,1)}(x)\frac{\lambda e^{-\lambda x}}{1-e^{-\lambda}} \\&=(1-p)I_{(0,\i
PDF of sum of truncated exponential distribution The pdf of a $(0,1)$-trunctated exponential distribution with rate parameter $\lambda$ can be written as \begin{align} f(x)&=I_{(0,1)}(x)\frac{\lambda e^{-\lambda x}}{1-e^{-\lambda}} \\&=(1-p)I_{(0,\infty)}(x)\lambda e^{-\lambda x}+pI_{(1,\infty)}(x)\lambda e^{-\lambda(x-1)}, \end{align} where $1-p=1/(1-e^{-\lambda})$. Although one weight is negative, it follows that we can treat this as a mixture of two exponential distributions and proceed as if both weights were positive. The sum of $n$ such truncated exponentials can then be seen as mixture having a total of $n+1$ components (although again, some of the associated weights are actually negative). The $i$th component is the sum of $n$ exponentials out of which $i$ are shifted one unit to the right. The $i$th component is thus a gamma distributions with shape parameter $n$ shifted $i$ to the right. The overall pdf is $$ f_n(x)=\sum_{i=0}^n {n \choose i}p^i(1-p)^{n-i}I_{(i,\infty)}(x)\frac{\lambda^n}{(n-1)!}(x-i)^{n-1}e^{-\lambda (x-i)}. $$ For $n=3$ and $\lambda=1$, the pdf has the following shape. R code: # density function of the sum dsum01exp <- function(x, n, lambda=1) { p <- 1 - 1/(1 - exp(-lambda)) d <- 0 for (i in 0:n) { d <- d + choose(n, i)*p^i*(1 - p)^(n - i)*dgamma(x-i,shape=n,rate=lambda) } d } # random sample from (0,1)-trunctated exponential r01exp <- function(n, lambda=1) { -1/lambda*log((1-(1-exp(-lambda))*runif(n))) } # histogram of simulated sums vs. theoretical pdf n <- 3 x <- matrix(r01exp(n*1e+5), ncol=n) hist(apply(x,1,sum),breaks=100, prob=TRUE, xlab="sum(x)",main="") curve(dsum01exp(x,n), 0, n, ylab=expression(f[n](x)), add=TRUE)
PDF of sum of truncated exponential distribution The pdf of a $(0,1)$-trunctated exponential distribution with rate parameter $\lambda$ can be written as \begin{align} f(x)&=I_{(0,1)}(x)\frac{\lambda e^{-\lambda x}}{1-e^{-\lambda}} \\&=(1-p)I_{(0,\i
34,695
How can I calculate $\mathrm{E}\!\left[\sum _{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right]$?
Here is an alternative answer to @Lucas' using the law of iterated expectations: $$ \begin{align} E\left[\sum_{i=1}^X1_{(Y_i \leq Y_{n+1})}\right] & = E\left[E\left[\sum_{i=1}^X1_{(Y_i \leq Y_{n+1})}|X\right]\right] \\ & = E\left[\sum_{i=1}^XE[1_{(Y_i \leq Y_{n+1})}|X]\right] \\ & = E\left[\sum_{i=1}^XE[1_{(Y_i \leq Y_{n+1})}]\right] \\ & = E\left[\sum_{i=1}^XE\left[E[1_{(Y_i \leq Y_{n+1})}|Y_{n+1}]\right]\right] \\ & = E\left[\sum_{i=1}^XE[F(Y_{n+1})]\right] \\[12pt] & = E[X]E\left[F(Y_{n+1})\right] \\[12pt] & =\frac{n+1}{2}E[F(Y_{n+1})] \end {align}$$ The third step follows from independence of $Y_i$ and $Y_{n+1}$ from $X$; the fourth step is again an application of the law of iterated expectations; the last step is simply an application of the formula for the expectation of a discrete uniform random variable. By inverting the order of integration, we derive the remaining expectation: $$ \begin{align} E[F(Y_{n+1})] & = \int_{-\infty}^{\infty}F(y)dF(y) \\ & = \int_{-\infty}^{\infty} \int_{-\infty}^y dF(x)dF(y) \\ & = \int_{-\infty}^{\infty} \int_{x}^{\infty} dF(y)dF(x) \\ & = \int_{-\infty}^{\infty} (1-F(x))dF(x) \\[10pt] & = 1-E[F(Y_{n+1})] \end{align} $$ which implies $E[F(Y_{n+1})] = \frac{1}{2}$. Hence: $$ E\left[\sum_{i=1}^X1_{(Y_i \leq Y_{n+1})}\right] = \frac{n+1}{4} $$
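A quick Monte Carlo sanity check in R (my own sketch; it assumes $X$ is uniform on $\{1,\dots,n\}$, consistent with $E[X]=(n+1)/2$, and takes the $Y_i$ iid standard Normal, though any continuous $F$ would do):
set.seed(1)
n <- 10; reps <- 1e5
vals <- replicate(reps, {
  x <- sample(1:n, 1)     # X ~ uniform on {1, ..., n}, so E[X] = (n + 1)/2
  y <- rnorm(n + 1)       # Y_1, ..., Y_{n+1} iid continuous
  sum(y[1:x] <= y[n + 1])
})
mean(vals)                # should be close to (n + 1)/4 = 2.75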
How can I calculate $\mathrm{E}\!\left[\sum _{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right]$?
Here is an alternative answer to @Lucas' using the law of iterated expectations: $$ \begin{align} E\left[\sum_{i=1}^X1_{(Y_i \leq Y_{n+1})}\right] & = E\left[E\left[\sum_{i=1}^X1_{(Y_i \leq Y_{n+1})}
How can I calculate $\mathrm{E}\!\left[\sum _{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right]$? Here is an alternative answer to @Lucas' using the law of iterated expectations: $$ \begin{align} E\left[\sum_{i=1}^X1_{(Y_i \leq Y_{n+1})}\right] & = E\left[E\left[\sum_{i=1}^X1_{(Y_i \leq Y_{n+1})}|X\right]\right] \\ & = E\left[\sum_{i=1}^XE[1_{(Y_i \leq Y_{n+1})}|X]\right] \\ & = E\left[\sum_{i=1}^XE[1_{(Y_i \leq Y_{n+1})}]\right] \\ & = E\left[\sum_{i=1}^XE\left[E[1_{(Y_i \leq Y_{n+1})}|Y_{n+1}]\right]\right] \\ & = E\left[\sum_{i=1}^XE[F(Y_{n+1})]\right] \\[12pt] & = E[X]E\left[F(Y_{n+1})\right] \\[12pt] & =\frac{n+1}{2}E[F(Y_{n+1})] \end {align}$$ The third step follows from independence of $Y_i$ and $Y_{n+1}$ from $X$; the fourth step is again an application of the law of iterated expectations; the last step is simply an application of the formula for the expectation of a discrete uniform random variable. By inverting the order of integration, we derive the remaining expectation: $$ \begin{align} E[F(Y_{n+1})] & = \int_{-\infty}^{\infty}F(y)dF(y) \\ & = \int_{-\infty}^{\infty} \int_{-\infty}^y dF(x)dF(y) \\ & = \int_{-\infty}^{\infty} \int_{x}^{\infty} dF(y)dF(x) \\ & = \int_{-\infty}^{\infty} (1-F(x))dF(x) \\[10pt] & = 1-E[F(Y_{n+1})] \end{align} $$ which implies $E[F(Y_{n+1})] = \frac{1}{2}$. Hence: $$ E\left[\sum_{i=1}^X1_{(Y_i \leq Y_{n+1})}\right] = \frac{n+1}{4} $$
How can I calculate $\mathrm{E}\!\left[\sum _{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right]$? Here is an alternative answer to @Lucas' using the law of iterated expectations: $$ \begin{align} E\left[\sum_{i=1}^X1_{(Y_i \leq Y_{n+1})}\right] & = E\left[E\left[\sum_{i=1}^X1_{(Y_i \leq Y_{n+1})}
34,696
How can I calculate $\mathrm{E}\!\left[\sum _{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right]$?
By distributional symmetry, $\Pr\{Y_i\leq Y_{n+1}\}=\Pr\{Y_{n+1}\leq Y_i\}$, for each $i=1,\dots,n$. Since $F$ is continuous, we have $$ \Pr\{Y_i\leq Y_{n+1}\} = 1-\Pr\{Y_{n+1}< Y_i\}=1-\Pr\{Y_{n+1}\leq Y_i\}. $$ Therefore, $\mathrm{E}\left[I_{\{Y_i\leq Y_{n+1}\}}\right]=\Pr\{Y_i\leq Y_{n+1}\}=1/2$. Now, we have $$ \mathrm{E}\!\left[\sum_{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\;\Bigg\vert\; X=x\right] = \mathrm{E}\!\left[\sum_{i=1}^x I_{\{Y_i\leq Y_{n+1}\}}\;\Bigg\vert\; X=x\right] = \sum_{i=1}^x\;\mathrm{E}\!\left[I_{\{Y_i\leq Y_{n+1}\}}\;\Bigg\vert\; X=x\right] $$ $$ = \sum_{i=1}^x\;\mathrm{E}\!\left[I_{\{Y_i\leq Y_{n+1}\}}\right] = \frac{x}{2}, $$ in which we used the linearity of the conditional expectation and the independence of $X$ and the $Y_i$'s. Hence, $$ \mathrm{E}\!\left[\sum_{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right] = \mathrm{E}\!\left[\mathrm{E}\!\left[\sum_{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\;\Bigg\vert\; X\right]\right] = \mathrm{E}\left[\frac{X}{2}\right] = \frac{n+1}{4}. $$
How can I calculate $\mathrm{E}\!\left[\sum _{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right]$?
By distributional symmetry, $\Pr\{Y_i\leq Y_{n+1}\}=\Pr\{Y_{n+1}\leq Y_i\}$, for each $i=1,\dots,n$. Since $F$ is continuous, we have $$ \Pr\{Y_i\leq Y_{n+1}\} = 1-\Pr\{Y_{n+1}< Y_i\}=1-\Pr\{Y_{n+1}
How can I calculate $\mathrm{E}\!\left[\sum _{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right]$? By distributional symmetry, $\Pr\{Y_i\leq Y_{n+1}\}=\Pr\{Y_{n+1}\leq Y_i\}$, for each $i=1,\dots,n$. Since $F$ is continuous, we have $$ \Pr\{Y_i\leq Y_{n+1}\} = 1-\Pr\{Y_{n+1}< Y_i\}=1-\Pr\{Y_{n+1}\leq Y_i\}. $$ Therefore, $\mathrm{E}\left[I_{\{Y_i\leq Y_{n+1}\}}\right]=\Pr\{Y_i\leq Y_{n+1}\}=1/2$. Now, we have $$ \mathrm{E}\!\left[\sum_{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\;\Bigg\vert\; X=x\right] = \mathrm{E}\!\left[\sum_{i=1}^x I_{\{Y_i\leq Y_{n+1}\}}\;\Bigg\vert\; X=x\right] = \sum_{i=1}^x\;\mathrm{E}\!\left[I_{\{Y_i\leq Y_{n+1}\}}\;\Bigg\vert\; X=x\right] $$ $$ = \sum_{i=1}^x\;\mathrm{E}\!\left[I_{\{Y_i\leq Y_{n+1}\}}\right] = \frac{x}{2}, $$ in which we used the linearity of the conditional expectation and the independence of $X$ and the $Y_i$'s. Hence, $$ \mathrm{E}\!\left[\sum_{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right] = \mathrm{E}\!\left[\mathrm{E}\!\left[\sum_{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\;\Bigg\vert\; X\right]\right] = \mathrm{E}\left[\frac{X}{2}\right] = \frac{n+1}{4}. $$
How can I calculate $\mathrm{E}\!\left[\sum _{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right]$? By distributional symmetry, $\Pr\{Y_i\leq Y_{n+1}\}=\Pr\{Y_{n+1}\leq Y_i\}$, for each $i=1,\dots,n$. Since $F$ is continuous, we have $$ \Pr\{Y_i\leq Y_{n+1}\} = 1-\Pr\{Y_{n+1}< Y_i\}=1-\Pr\{Y_{n+1}
34,697
How can I calculate $\mathrm{E}\!\left[\sum _{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right]$?
We have \begin{align} E\left[ \sum_{i = 1}^X I[Y_i \leq Y_{n + 1}] \right] &= E\left[ \sum_{i = 1}^n I[i \leq X] I[Y_i \leq Y_{n + 1}] \right] \\ &= \sum_{i = 1}^n E\left[ I[i \leq X] I[Y_i \leq Y_{n + 1}] \right] \\ &= \sum_{i = 1}^n E\left[ I[i \leq X] \right] \cdot E\left[ I[Y_i \leq Y_{n + 1}] \right] \\ &= \sum_{i = 1}^n \frac{n + 1 - i}{n} \cdot E[I[Y_i \leq Y_{n + 1}]] \\ &= \sum_{i = 1}^n \frac{n + 1 - i}{n} \cdot E\left[ F(Y_{n + 1}) \right] \\ &= \sum_{i = 1}^n \frac{n + 1 - i}{n} \cdot \frac{1}{2} \\ &= \frac{n + 1}{4} \end{align} The second step follows from the linearity of expectations, the third step from the independence of $X$ and $Y_1, ..., Y_{n + 1}$, the fourth step from $P(i \leq X) = (n + 1 - i)/n$ for $X$ uniform on $\{1, \dots, n\}$, and the fifth step from the fact that $$F(y) = P(Y \leq y) = E[I[Y \leq y]].$$ To prove the sixth step, i.e. $E[F(Y_{n+1})] = 1/2$, you can use partial integration. For the final step, note that $\sum_{i=1}^n (n + 1 - i) = \sum_{j=1}^n j = n(n+1)/2$.
How can I calculate $\mathrm{E}\!\left[\sum _{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right]$?
We have \begin{align} E\left[ \sum_{i = 1}^X I[Y_i \leq Y_{n + 1}] \right] &= E\left[ \sum_{i = 1}^n I[i \leq X] I[Y_i \leq Y_{n + 1}] \right] \\ &= \sum_{i = 1}^n E\left[ I[i \leq X] I[Y_i \leq Y_{
How can I calculate $\mathrm{E}\!\left[\sum _{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right]$? We have \begin{align} E\left[ \sum_{i = 1}^X I[Y_i \leq Y_{n + 1}] \right] &= E\left[ \sum_{i = 1}^n I[i \leq X] I[Y_i \leq Y_{n + 1}] \right] \\ &= \sum_{i = 1}^n E\left[ I[i \leq X] I[Y_i \leq Y_{n + 1}] \right] \\ &= \sum_{i = 1}^n E\left[ I[i \leq X] \right] \cdot E\left[ I[Y_i \leq Y_{n + 1}] \right] \\ &= \sum_{i = 1}^n \frac{i}{n} \cdot E[I[Y_i \leq Y_{n + 1}]] \\ &= \sum_{i = 1}^n \frac{i}{n} \cdot E\left[ F(Y_{n + 1})] \right] \\ &= \sum_{i = 1}^n \frac{i}{n} \cdot \frac{1}{2} \\ &= \frac{n + 1}{4} \end{align} The second step follows from the linearity of expectations, the third step from the independence of $X$ and $Y_1, ..., Y_{n + 1}$, and the fifth step from the fact that $$F(y) = P(Y \leq y) = E[I[Y \leq y]].$$ To prove the sixth step, you can use partial integration. For the final step, you use the formula for partial sums.
How can I calculate $\mathrm{E}\!\left[\sum _{i=1}^X I_{\{Y_i\leq Y_{n+1}\}}\right]$? We have \begin{align} E\left[ \sum_{i = 1}^X I[Y_i \leq Y_{n + 1}] \right] &= E\left[ \sum_{i = 1}^n I[i \leq X] I[Y_i \leq Y_{n + 1}] \right] \\ &= \sum_{i = 1}^n E\left[ I[i \leq X] I[Y_i \leq Y_{
34,698
Agresti-Coull Interval question
The reason why Agresti and Coull chose to "add two successes and two failures" lies in a rounding of the Wilson 95% CI. It was not determined by simulation studies. I had to write a small paper on the topic once; here are my findings (condensed) and an answer to your question. Pro-tip: R has a package, PropCIs, with different CIs built in (Agresti-Coull is the add4ci method). Background When $X_i \stackrel{d}{=} \text{Ber}(p)$ and the sample size is $n$, one would intuitively try the following $1-\alpha$ CI (the so-called Wald CI): $$\hat p \pm z_{\alpha/2}\sqrt{\frac{\hat p(1-\hat p)}{n}}$$ But as you know, this behaves badly. One of the reasons the default CI behaves badly is the use of $\hat p$ to determine the width of the CI. This results in a very small width when $\hat p$ is close to 1 or 0. The Wilson CI tries to mitigate this issue. The Wilson CI uses the standard error evaluated at the hypothesised value $p$ (rather than at $\hat p$) and seeks all $p$ which solve $$\left|\dfrac{\hat p - p}{\sqrt{\frac{p(1-p)}{n}}}\right| < z_{\alpha/2}$$ Working this out gives a quadratic equation, which results in the following crazy formula for the CI: $$\hat p \left(\frac{n}{n+z_{\alpha/2}^2}\right) + \frac{1}{2}\left( \frac{z_{\alpha/2}^2}{n+z_{\alpha/2}^2}\right) \pm z_{\alpha/2} \sqrt{\dfrac{1}{n+z_{\alpha/2}^2}\left[ \hat p (1-\hat p) \left( \dfrac{n}{n+z_{\alpha/2}^2}\right)+\dfrac{1}{2}\left(1-\dfrac{1}{2}\right) \left(\dfrac{z_{\alpha/2}^2}{n+z_{\alpha/2}^2}\right)\right]}.$$ This CI behaves pretty well. See the figure below. Agresti-Coull Agresti and Coull looked at the center of the Wilson CI and noticed a simplification if one calculates a 95% CI: $z_{0.025} = 1.96\approx 2$. Now notice how the center of the Wilson CI is given by: $$\hat p \left( \dfrac{n}{n+z^2_{\alpha/2}}\right) + \dfrac{1}{2}\left( \dfrac{z^2_{\alpha/2}}{n+z^2_{\alpha/2}} \right)$$ When you apply the simplification suggested above you find: $$\tilde p = \hat p \left( \frac{n}{n+4}\right) + \dfrac{1}{2}\left( \dfrac{4}{n+4}\right) = \hat p \left( \frac{n}{n+4}\right) + \dfrac{2}{n+4} = \dfrac{X+2}{n+4}$$ which explains the "adding two failures, two successes" method. The Agresti-Coull CI is then defined as: $$\tilde p \pm z_{\alpha/2}\sqrt{\dfrac{\tilde p(1-\tilde p)}{\tilde n}}$$ with $\tilde n = n + 4$. Comparison - Coverage probability The following picture shows the three intervals (as well as Clopper-Pearson) and the coverage probability from a simulation with 5000 replications. Why not add 3 or even more successes/failures? First of all, the derivation of the Agresti-Coull interval makes sense. I've looked at your graph of the performance of the different methods and you claim that "adding three successes" is better, but I'm not convinced. I would say it is worse, since the coverage probability is systematically too large. That means the CIs are too wide, which makes them too conservative. Using such an interval, it would be harder to detect a significant result. The paper L. D. Brown et al. (2001). Interval Estimation for a Binomial Proportion. Statistical Science. contains a very good overview of all the CIs for a binomial distribution.
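To make the three intervals concrete, here is a small self-contained R sketch (my own, not the PropCIs code) computing the 95% Wald, Wilson and add-4 Agresti-Coull intervals for $x$ successes out of $n$:
binom_cis <- function(x, n, conf = 0.95) {
  z <- qnorm(1 - (1 - conf)/2)
  p_hat <- x/n
  wald <- p_hat + c(-1, 1)*z*sqrt(p_hat*(1 - p_hat)/n)
  # Wilson: centre and half-width from the quadratic solution above
  centre <- (x + z^2/2)/(n + z^2)
  half <- z/(n + z^2)*sqrt(n*p_hat*(1 - p_hat) + z^2/4)
  wilson <- centre + c(-1, 1)*half
  # Agresti-Coull, "add 2 successes and 2 failures" version (z ~ 2)
  p_tilde <- (x + 2)/(n + 4)
  ac <- p_tilde + c(-1, 1)*z*sqrt(p_tilde*(1 - p_tilde)/(n + 4))
  rbind(wald = wald, wilson = wilson, agresti_coull = ac)
}
binom_cis(x = 2, n = 20)   # compare the three intervals for a small sample
With few successes, the Wald interval collapses towards zero while the Wilson and Agresti-Coull intervals stay sensibly wide, which is the behaviour discussed above.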
Agresti-Coull Interval question
The reason why Agresti and Coull chose these "add two successes and two failures" lies in rounding of a Wilson 95% CI. It was not determined by simulation studies. I had to write a small paper on the
Agresti-Coull Interval question The reason why Agresti and Coull chose these "add two successes and two failures" lies in rounding of a Wilson 95% CI. It was not determined by simulation studies. I had to write a small paper on the topic once, here were my findings (condensed) and an answer to your question. Pro-tip, R has a package propCIs which has different CI's build in. (Agresti-Coull is the add4ci method) Background When $X_i \stackrel{d}{=} \text{Ber}(p)$ and the sample size is $n$ then one would intuitively try the following $1-\alpha$ CI (so called Wald CI): $$\hat p \pm z_{\alpha/2}\sqrt{\frac{\hat p(1-\hat p)}{n}}$$ But as you know this behaves badly. One of the reason the default CI behaves badly, is the usage of $\hat p$ to find the width of the CI. This results in a very small width when $\hat p$ is close to 1 or 0. The Wilson CI tries to mitigate this issue. This Wilson CI uses the width under $H_0$ and seeks all $p$ which solve $$\left|\dfrac{\hat p - p}{\sqrt{\frac{p(1-p)}{n}}}\right| < z_{\alpha/2}$$ Working this out results in a quadratic equation wich results in the following crazy formula for the CI: $$\hat p \left(\frac{n}{n+z_{\alpha/2}^2}\right) + \frac{1}{2}\left( \frac{z_{\alpha/2}^2}{n+z_{\alpha/2}^2}\right) \pm z_{\alpha/2} \sqrt{\dfrac{1}{n+z_{\alpha/2}^2}\left[ \hat p (1-\hat p) \left( \dfrac{n}{n+z_{\alpha/2}^2}\right)+\dfrac{1}{2}\left(1-\dfrac{1}{2}\right) \left(\dfrac{z_{\alpha/2}^2}{n+z_{\alpha/2}^2}\right)\right]}.$$ This CI behaves pretty good. See the figure below. Agresti-Coull Agresti and Coull looked at the center of the Wilson CI and noticed a simplification if one calculates a 95% CI. $z_{0.025} = 1.96\approx 2$. Now notice how the center of the Wilson CI was given by: $$\hat p \left( \dfrac{n}{n+z^2_{\alpha/2}}\right) + \dfrac{1}{2}\left( \dfrac{z^2_{\alpha/2}}{n+z^2_{\alpha/2}} \right)$$ When you apply the simplification suggested above you find: $$\tilde p = \hat p \left( \frac{n}{n+4}\right) + \dfrac{1}{2}\left( \dfrac{4}{n+4}\right) = \hat p \left( \frac{n}{n+4}\right) + \dfrac{2}{n+4} = \dfrac{X+2}{n+4}$$ Which explaines the "adding two failures, two success - method". The Agresti-Coull CI is then defined as: $$\tilde p \pm z_{\alpha/2}\sqrt{\dfrac{\tilde p(1-\tilde p)}{\tilde n}}$$ Comparison - Coverage probability The following picture show the three (as well as Clopper-Pearson) and the coverage probability for simultation of 5000 times. Why not use 3 or even more successes/failures ratio's? First of all, the derivation of the Agresti-Coull interval makes sense. I've looked at your graph of the performance of the different methods and you claim the "adding three successes" is better, but I'm not convinced. I would say it is worse since the coverage probability is systematically to large. Meaning that the CI are to large, which makes them to conservative. Using this interval it would be harder to detect a significant result. The paper L. D. Brown et al. (2001). Interval Estimation for a Binomial Proportion. Statistical Science. contains a very good overview of all the CI's for a binomial distribution.
Agresti-Coull Interval question The reason why Agresti and Coull chose these "add two successes and two failures" lies in rounding of a Wilson 95% CI. It was not determined by simulation studies. I had to write a small paper on the
34,699
Preserving comments on graphs for exploratory data analysis
Here's an easy solution that many people have found useful. If you find it trivial, I won't disagree. This cuts across statistical software, operating systems and other computing details. Just copy and paste your graphs into your favourite word or text processor and then add your own comments. That could mean MS Word, software supporting TeX, LaTeX, etc. That's it. Clearly the advantages are simplicity (nothing new to learn) and flexibility (add what you want in the way that you want it). This isn't an automated solution. But even automated solutions depend on being fed information on the graphs and your comments, so how is that different?
Preserving comments on graphs for exploratory data analysis
Here's an easy solution that many people have found useful. If you find it trivial, I won't disagree. This cuts across statistical software, operating system and other computing details. Just copy an
Preserving comments on graphs for exploratory data analysis Here's an easy solution that many people have found useful. If you find it trivial, I won't disagree. This cuts across statistical software, operating system and other computing details. Just copy and paste your graphs into your favourite word or text processor and then add your own comments. That could mean MS Word, software supporting TeX, LaTeX, etc. That's it. Clearly the advantages are simplicity (nothing new to learn) and flexibility (add what you want in the way that you want it). This isn't an automated solution. But even automated solutions depend on being fed information on the graphs and your comments, so what is that different?
Preserving comments on graphs for exploratory data analysis Here's an easy solution that many people have found useful. If you find it trivial, I won't disagree. This cuts across statistical software, operating system and other computing details. Just copy an
34,700
Preserving comments on graphs for exploratory data analysis
I highly recommend Jupyter Notebook, which lets you create documents that contain interspersed code blocks, plots, and notes/documentation. The document can include markdown and LaTeX, which is automatically rendered (much like writing on CrossValidated). When you run a code block, any text output and plots that it generates are added inline to the document. You can change a code block and re-run it to update the output/plots. This is nice for testing things interactively (e.g. tweaking code/parameters to see what happens). I think it's easier than having to export figures and paste them into a traditional, static document, especially if you change anything. You can export a notebook to PDF, etc. to get a static copy. It's open source and works with Python, R, and other languages. The interface is browser-based, so it's cross-platform and it is easy to share notebooks. You can run the backend on your own machine, or you can host notebooks on a website so you/others can edit/view/run them from anywhere (the code will run on the server). Apparently there's a way to configure the notebook as the frontend to a compute cluster for parallel computations.
Preserving comments on graphs for exploratory data analysis
I highly recommend Jupyter Notebook, which lets you create documents that contain interspersed code blocks, plots, and notes/documentation. The document can include markdown and latex, which is automa
Preserving comments on graphs for exploratory data analysis I highly recommend Jupyter Notebook, which lets you create documents that contain interspersed code blocks, plots, and notes/documentation. The document can include markdown and latex, which is automatically rendered (much like writing on CrossValidated). When you run a code block, any text output and plots that it generates are added inline to the document. You can change a code block and re-run to update the output/plots. This is nice for testing things interactively (e.g. tweaking code/parameters to see what happens). I think it's easier than having to export figures and and paste them into a traditional, static document, especially if you change anything. You can export a notebook to PDF, etc. to get a static copy. It's open source and works with Python, R, and other languages. The interface is browser-based, so it's cross-platform and easy to share notebooks. You can run the backend on your own machine, or you can host notebooks on a website so you/others can edit/view/run them from anywhere (the code will run on the server). Apparently there's a way to configure the notebook as the frontend to a compute cluster for parallel computations.
Preserving comments on graphs for exploratory data analysis I highly recommend Jupyter Notebook, which lets you create documents that contain interspersed code blocks, plots, and notes/documentation. The document can include markdown and latex, which is automa