25,501 | How to calculate regularization parameter in ridge regression given degrees of freedom and input matrix? | Here is a small Matlab function based on the formula proved by probabilityislogic:
function [lambda] = calculate_lambda(Xnormalised, df)
[n, p] = size(Xnormalised);
% Singular value decomposition of the (normalised) data matrix
[u, s, v] = svd(Xnormalised);
Di = diag(s);
Dsq = Di.^2;
% Newton-Raphson iteration to solve sum(Dsq ./ (Dsq + lambda)) = df for lambda
lambdaPrev = (p - df) / df;
lambdaCur = Inf; % arbitrary large starting value
diff = lambdaCur - lambdaPrev;
threshold = eps(class(Xnormalised));
while (abs(diff) > threshold)
    numerator = sum(Dsq ./ (Dsq + lambdaPrev)) - df;
    denominator = sum(Dsq ./ ((Dsq + lambdaPrev).^2));
    lambdaCur = lambdaPrev + (numerator / denominator);
    diff = lambdaCur - lambdaPrev;
    lambdaPrev = lambdaCur;
end
lambda = lambdaCur;
end
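For readers outside Matlab, the same Newton iteration can be sketched in Python/NumPy; the function name and defaults below are my own, but the starting value and update rule mirror the Matlab routine:

```python
import numpy as np

def ridge_lambda_for_df(X, df, tol=1e-10, max_iter=100):
    """Solve sum(d_i^2 / (d_i^2 + lam)) = df for lam by Newton's method,
    where d_i are the singular values of the design matrix X."""
    d2 = np.linalg.svd(X, compute_uv=False) ** 2
    p = X.shape[1]
    lam = (p - df) / df  # same starting value as the Matlab routine
    for _ in range(max_iter):
        f = np.sum(d2 / (d2 + lam)) - df        # residual of the df equation
        fprime = -np.sum(d2 / (d2 + lam) ** 2)  # derivative w.r.t. lam
        step = f / fprime
        lam -= step
        if abs(step) < tol:
            break
    return lam
```

Because df(lambda) is convex and strictly decreasing in lambda, Newton's method started below the root converges monotonically here.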
25,502 | Predict after running the mlogit function in R | Here's a useful trick: add the data you want to predict to your original estimation sample, but use the weights variable to set the weight of those new observations to zero. Estimate the model (with the new observations weighted to zero), and get the predictions from the "probabilities" output. That way you can bypass the predict function, which is a mess.
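The reason this trick works is generic: zero-weight observations contribute nothing to a weighted fit, yet fitted values for them are still defined. A minimal NumPy illustration with weighted least squares (the data here are made up):

```python
import numpy as np

# Toy "estimation sample": y = 1 + 2*x + noise (made-up data)
rng = np.random.default_rng(0)
x = rng.normal(size=30)
X = np.column_stack([np.ones_like(x), x])
y = 1 + 2 * x + 0.1 * rng.normal(size=30)

# New observations we only want predictions for (intercept + x columns)
X_new = np.array([[1.0, -1.0], [1.0, 0.5]])

# Stack them onto the sample, but give them weight zero
X_all = np.vstack([X, X_new])
y_all = np.concatenate([y, [0.0, 0.0]])  # dummy responses; zero weight makes them inert
w = np.concatenate([np.ones(30), np.zeros(2)])

# Weighted least squares: beta = (X' W X)^{-1} X' W y
W = np.diag(w)
beta_w = np.linalg.solve(X_all.T @ W @ X_all, X_all.T @ W @ y_all)

# Same coefficients as fitting on the original sample alone ...
beta_plain = np.linalg.lstsq(X, y, rcond=None)[0]

# ... and the fitted values for the zero-weight rows are the predictions we wanted
preds = X_new @ beta_w
```

The same logic carries over to any likelihood-based estimator that accepts observation weights, which is why the trick applies to mlogit.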
25,503 | Predict after running the mlogit function in R | The mlogit package does have a predict() method, at least in the version I'm using (0.2-3 with R 2.15.3).
The code put up by @Zach has one error in it. The "long format" data used by mlogit() has one row for each alternative; this is the format created by the mlogit.data() function. Therefore to get a prediction for the first case you need to pull out all the rows for that case, and there are 4:
Fish_fit<-Fish[-(1:4),]
Fish_test<-Fish[1:4,]
m <- mlogit(mode ~price+ catch | income, data = Fish_fit)
predict(m,newdata=Fish_test)
which gives a good result.
25,504 | Predict after running the mlogit function in R | After quite a lot of effort in trying to use the predict function for the population, I think I can add a few insights to all your answers.
The predict function of mlogit works fine, you just have to make some adjustments and be sure that the following things are taken care of:
The newdata (as expected) should include exactly the same data as the sample used for the estimation of the model. This means that one should check for "hidden" properties of the data (such as a factor that inherits levels that do not exist, in which case droplevels can be useful, factors not present in the estimation sample, a wrong column name, etc.).
You have to supply an arbitrary choice in your newdata (if it does not already exist), something that can easily be done using the sample function:
MrChoice <- sample(c("Car", "Bus", "Walk"), nrow(mynewData), replace = TRUE, prob = c(0.5, 0.4, 0.1))
mynewData$mode <- MrChoice
The next required step is to again transform the data to mlogit data, using the same function as used for the sample data, for example:
ExpData3<- mlogit.data(mynewData, shape="wide", choice = "mode",sep=".",id = "TripID")
The final step would be the actual prediction using the predict function.
resulted<-predict(ml1,newdata=ExpData3)
25,505 | Predict after running the mlogit function in R | To answer my own question, I've moved over to using the 'glmnet' package to fit my multinomial logits, which has the added advantage of using the lasso or elastic net to regularize my independent variables. glmnet seems to be a much more 'finished' package than mlogit, complete with a 'predict' function.
25,506 | Predict after running the mlogit function in R | mlogit has a predict function, but I found it very difficult to use. I wrote my own very ugly set of functions for an implementation that I have. Anyone is welcome to use or improve them, stored on my github profile.
25,507 | Predict after running the mlogit function in R | I'm pretty sure this is easily done with the given mlogit package by using the fitted function and then the standard R predict function. As chl pointed out, although I haven't done it myself yet (at least not the predict step), this is demonstrated in the package vignette on p. 29.
25,508 | On the use of oblique rotation after PCA | I think there are different opinions or views about PCA, but basically we often think of it as either a reduction technique (you reduce your feature space to a smaller one, often much more "readable", providing you take care of properly centering/standardizing the data when needed) or a way to construct latent factors or dimensions that account for a significant part of the inter-individual dispersion (here, the "individuals" stand for the statistical units on which data are collected; these may be countries, people, etc.). In both cases, we construct linear combinations of the original variables that account for the maximum of variance (when projected on the principal axes), subject to a constraint of orthogonality between any two principal components. Now, what has been described is purely algebraic or mathematical and we don't think of it as a (generating) model, contrary to what is done in the factor analysis tradition, where we include an error term to account for some kind of measurement error. I also like the introduction given by William Revelle in his forthcoming handbook on applied psychometrics using R (Chapter 6): if we want to analyze the structure of a correlation matrix, then
The first [approach, PCA] is a model that approximates the correlation matrix in terms of the product of components where each component is a weighted linear sum of the variables; the second model [factor analysis] is also an approximation of the correlation matrix by the product of two factors, but the factors in this are seen as causes rather than as consequences of the variables.
In other words, with PCA you are expressing each component (factor) as a linear combination of the variables, whereas in FA these are the variables that are expressed as a linear combination of the factors. It is well acknowledged that both methods will generally yield quite similar results (see e.g. Harman, 1976 or Cattell, 1978), especially in the "ideal" case where we have a large number of individuals and a good factor:variables ratio (typically varying between 2 and 10, depending on the authors you consider!). This is because, by estimating the diagonals in the correlation matrix (as is done in FA, and these elements are known as the communalities), the error variance is eliminated from the factor matrix. This is the reason why PCA is often used as a way to uncover latent factors or psychological constructs in place of FA developed in the last century. But, as we go on this way, we often want to reach an easier interpretation of the resulting factor structure (or the so-called pattern matrix). And then comes the useful trick of rotating the factorial axes so that we maximize loadings of variables on specific factors, or equivalently reach a "simple structure". Using orthogonal rotation (e.g. VARIMAX), we preserve the independence of the factors. With oblique rotation (e.g. OBLIMIN, PROMAX), we break it and factors are allowed to correlate. This has been largely debated in the literature, and has led some authors (not psychometricians, but statisticians in the early 1960s) to conclude that FA is an unfair approach due to the fact that researchers might seek the factor solution that is most convenient to interpret.
But the point is that rotation methods were originally developed in the context of the FA approach and are now routinely used with PCA. I don't think this contradicts the algorithmic computation of the principal components: You can rotate your factorial axes the way you want, provided you keep in mind that once correlated (by oblique rotation) the interpretation of the factorial space becomes less obvious.
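To make the rotation step concrete, here is a sketch of the classic varimax (orthogonal) rotation in Python/NumPy; the function is my own implementation of Kaiser's iterative SVD scheme, not taken from any of the packages discussed here:

```python
import numpy as np

def varimax(loadings, gamma=1.0, max_iter=100, tol=1e-8):
    """Orthogonally rotate a p x k loading matrix towards simple
    structure by maximizing Kaiser's varimax criterion."""
    p, k = loadings.shape
    R = np.eye(k)      # accumulated rotation matrix
    crit_prev = 0.0
    for _ in range(max_iter):
        L = loadings @ R
        # Gradient of the varimax criterion with respect to the rotation
        G = loadings.T @ (L ** 3 - (gamma / p) * L @ np.diag(np.sum(L ** 2, axis=0)))
        U, s, Vt = np.linalg.svd(G)
        R = U @ Vt     # nearest orthogonal matrix to the gradient
        crit = np.sum(s)
        if crit < crit_prev * (1 + tol):
            break
        crit_prev = crit
    return loadings @ R, R
```

Because R is orthogonal, the communalities (row sums of squared loadings) are unchanged by the rotation; an oblique method such as oblimin relaxes exactly this orthogonality, which is why the rotated factors may then correlate.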
PCA is routinely used when developing new questionnaires, although FA is probably a better approach in this case because we are trying to extract meaningful factors that take into account measurement errors and whose relationships might be studied on their own (e.g. by factoring out the resulting pattern matrix, we get a second-order factor model). But PCA is also used for checking the factorial structure of already validated ones. Researchers don't really worry about FA vs. PCA when they have, say, 500 representative subjects who are asked to rate a 60-item questionnaire tackling five dimensions (this is the case of the NEO-FFI, for example), and I think they are right because in this case we aren't very much interested in identifying a generating or conceptual model (the term "representative" is used here to alleviate the issue of measurement invariance).
Now, about the choice of rotation method and why some authors argue against the strict use of orthogonal rotation, I would like to quote Paul Kline, as I did in response to the following question, FA: Choosing Rotation matrix, based on “Simple Structure Criteria”,
(...) in the real world, it is not unreasonable to think that factors, as important determiners of behavior, would be correlated. -- P. Kline, Intelligence. The Psychometric View, 1991, p. 19
I would thus conclude that, depending on the objective of your study (do you want to highlight the main patterns of your correlation matrix, or do you seek to provide a sensible interpretation of the underlying mechanisms that may have caused you to observe such a correlation matrix), it is up to you to choose the method that is most appropriate: this doesn't have to do with the construction of linear combinations, but merely with the way you want to interpret the resulting factorial space.
References
Harman, H.H. (1976). Modern Factor Analysis. Chicago, University of Chicago Press.
Cattell, R.B. (1978). The Scientific Use of Factor Analysis. New York, Plenum.
Kline, P. (1991). Intelligence. The Psychometric View. Routledge.
25,509 | On the use of oblique rotation after PCA | The problem with orthogonal dimensions is that the components can be uninterpretable. Thus, while oblique rotation (i.e., nonorthogonal dimensions) is technically less satisfying, such a rotation sometimes enhances interpretability of the resulting components.
25,510 | On the use of oblique rotation after PCA | Basic Points
Rotation can make interpretation of components clearer
Oblique rotation often makes more theoretical sense; i.e., observed variables can be explained in terms of a smaller number of correlated components.
Example
10 tests all measuring ability, with some measuring verbal and some measuring spatial ability. All tests are intercorrelated, but intercorrelations within verbal or within spatial tests are greater than across test type. A parsimonious PCA might involve two correlated components, a verbal and a spatial one. Theory and research suggest that these two abilities are correlated. Thus, an oblique rotation makes theoretical sense.
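The intuition of this example can be checked with simulated data; all numbers below (correlation of 0.5 between abilities, loadings, noise level) are illustrative assumptions, not estimates from a real test battery:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

# Two correlated latent abilities (verbal, spatial), r = 0.5
cov = [[1.0, 0.5], [0.5, 1.0]]
abilities = rng.multivariate_normal([0.0, 0.0], cov, size=n)

# 5 verbal tests load on ability 1, 5 spatial tests on ability 2
loading, noise = 0.8, 0.6
verbal = loading * abilities[:, [0]] + noise * rng.normal(size=(n, 5))
spatial = loading * abilities[:, [1]] + noise * rng.normal(size=(n, 5))
tests = np.hstack([verbal, spatial])

R = np.corrcoef(tests, rowvar=False)
within = R[:5, :5][np.triu_indices(5, 1)].mean()  # verbal-verbal correlations
across = R[:5, 5:].mean()                         # verbal-spatial correlations
```

All ten tests are positively intercorrelated, but the within-type correlations come out noticeably higher than the across-type ones, which is exactly the pattern an oblique two-component solution captures.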
25,511 | If the null hypothesis is never really true, is there a point to using a statistical test without a priori power analysis? | If we focus on medical research: performing a study involves taking a risk and potentially harming people. This is acceptable within bounds defined by the Principle of Equipoise as outlined in the Declaration of Helsinki. Prior to recruiting even a single subject to a study, the protocol must be reviewed and approved by an ethics board, usually an institutional review board (IRB). Many medical centers include a statistician or epidemiologist on such boards, and they consider the statistical feasibility of the study. That is to say, the protocol statistician has outlined the assumptions and the anticipated effects and applied the necessary formulas to provide a rationale for the specified sample size(s). There are a number of questions to consider subsequently: are the assumptions reasonable? Is the analysis well powered? Does it make sense to recruit this many people without additional preliminary research? Will the potential benefits in the population after the study outweigh the risks to the study participants? And so on...
The constitution and mission of an IRB is outlined in the Belmont report. Just a plug, IRBs within medical institutions often have difficulty recruiting and retaining statisticians. If you are a biostatistician within an academic medical center, ask whether there is a seat for a biostatistician to participate.
The result of a successful medical trial is that standard practice can be updated based on what is known. Typically, this comes down to a trial showing a significant result. One can hope, based on the input of IRBs and the natural limitation of cost, that the design feature under study has a reasonably profound impact on health, so that the significance is compelling in its own right.
There is a flipside to this. Much less can be said of non-experimental, large EHR based studies which often show significant effects that can't and shouldn't be translated into practice. Open data sources and semi-closed data sources often do not have a steering committee to review the ethics of proposed research. Conversely, many languishing areas of healthcare continue to hem and haw over results due to the failure of trials to show unequivocal results, such as sodium reduction, cognitive behavioral therapy, fish oil supplementation, low fat diets, some vaccines, and so on.
In summary, for any confirmatory study, no there is no point to conducting a hypothesis test unless a power/sample size calculation has been performed - and the primary endpoint(s) is/are formally powered and secondary endpoints are reasonably powerful or important. In any other case, the analysis should be treated as exploratory, and a "hypothesis test" in this framework can be viewed as yet another method to identify research topics or detect effects - in that case, the statistician should be completely transparent in the reporting of their results. | If the null hypothesis is never really true, is there a point to using a statistical test without a | If we focus on medical research; performing a study involves taking a risk and potentially harming people. This is acceptable within bounds defined by the Principle of Equipoise as outlined in the Dec | If the null hypothesis is never really true, is there a point to using a statistical test without a priori power analysis?
If we focus on medical research; performing a study involves taking a risk and potentially harming people. This is acceptable within bounds defined by the Principle of Equipoise as outlined in the Declaration of Helsinki. Prior to recruiting even a single subject to a study, the protocol must be reviewed and approved by an ethics board, usually an institutional review board (IRB). Many medical centers include a statistician or epidemiologist on such boards, and they consider the statistical feasibility of the study. That is to say, the protocol statistician has outlined the assumptions and the anticipated effects and applied the necessary formulas to provide rationale for the specified sample size(s). There are a number of questions to consider subsequently: are the assumptions reasonable? Is the analysis well powered? Does it make sense to recruit this many people without additional preliminary research? Will the potential benefits in the population after the study outweigh the risks in the study participants? And so on...
The constitution and mission of an IRB is outlined in the Belmont report. Just a plug, IRBs within medical institutions often have difficulty recruiting and retaining statisticians. If you are a biostatistician within an academic medical center, ask whether there is a seat for a biostatistician to participate.
The result of a successful medical trial is that standard practice can be updated based on what is known. Typically, this does fall down to a trial showing a significant result. One can hope based on the input of IRBs, and the natural limitation of cost, that the design feature under study has a reasonable profound impact on health so that the significance is compelling in its own right.
There is a flipside to this. Much less can be said of non-experimental, large EHR based studies which often show significant effects that can't and shouldn't be translated into practice. Open data sources and semi-closed data sources often do not have a steering committee to review the ethics of proposed research. Conversely, many languishing areas of healthcare continue to hem and haw over results due to the failure of trials to show unequivocal results, such as sodium reduction, cognitive behavioral therapy, fish oil supplementation, low fat diets, some vaccines, and so on.
25,512 | If the null hypothesis is never really true, is there a point to using a statistical test without a priori power analysis? | I'm not sure how a power analysis would help. If you do a power analysis, which says you need N = 1000 and it is not significant, what do you know that you didn't know before you did the power analysis? (And how do you estimate the size of the effect to put into the power analysis?)
This is the problem with over-relying on (or perhaps over-interpreting) p-values. A non-significant p-value tells you that you do not have confidence in knowing the direction of the effect.
Andrew Gelman, in his blog, has popularized the idea of Type S and Type M errors instead of Type I and Type II errors. A Type S error is a Sign error - you have the wrong direction of the effect; a Type M error is a Magnitude error - you have not correctly estimated the magnitude of the effect.
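The Type S / Type M idea is easy to see in a small simulation. The setup below is illustrative and my own (a true effect of +0.1 measured with standard error 1, i.e. a badly underpowered design), not numbers taken from Gelman's post:

```python
import random
import statistics

# Illustrative simulation (assumed numbers): true effect +0.1, SE = 1.
random.seed(0)
true_effect, se = 0.1, 1.0
estimates = [random.gauss(true_effect, se) for _ in range(100_000)]
significant = [est for est in estimates if abs(est / se) > 1.96]

# Type S: among "significant" results, how often is the sign wrong?
type_s_rate = sum(est < 0 for est in significant) / len(significant)
# Type M: by what factor is the magnitude exaggerated on average?
exaggeration = statistics.mean(abs(est) for est in significant) / true_effect
```

With this setup a large minority of the "significant" estimates have the wrong sign, and the average significant magnitude overstates the true effect many times over.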
25,513 | If the null hypothesis is never really true, is there a point to using a statistical test without a priori power analysis? | If the null hypothesis is never really true, is there a point to using a statistical test?
If we already know that the null hypothesis is not true, then the point is not to prove that the null hypothesis is not true.
The point of the null hypothesis test is to show that a test is sensitive enough to be able to exclude certain hypotheses. The quality of a test is not the ability to show which values are most likely true; instead, it is the ability to show which values are likely not true, and to show it with high significance.
In addition, there are some issues with continuous distributions having zero probability for any specific value. So no value is ever exactly true when we consider a continuous distribution for some parameter. Still, what is relevant are the distribution densities and whether the density in the region around certain hypotheses, like the null hypothesis, is low or not.
25,514 | If the null hypothesis is never really true, is there a point to using a statistical test without a priori power analysis? | Yes! If we have done a priori power calculations to figure out the sample size we'd need to consistently detect an effect of the size we care about, and we've actually collected that amount of data, then a significant p-value is confirmatory and meaningful. You made a deliberate effort to collect enough data to rule out the straw-man argument of "What if your results are just sampling variation?" and you appear to have overcome that hurdle. In Deborah Mayo's words, you subjected your hypothesis to "severe testing," using a test with "an overwhelmingly good chance of revealing the presence of a specific error, if it exists --- but not otherwise."
But if we haven't done a priori power calculations, and we chose a sample size in other ways (convenience, or budget constraints, or a mistaken belief that "n=30 is big enough" for everything)... then our test was not "severe".
So, what's the use of hypothesis testing without an a priori power analysis? Sometimes we're in a situation where we simply couldn't have collected more data. (Maybe we are looking back at historical records and there's only a small sample left in existence. The population was larger than this sample, but there's no way to sample more data from that population any longer.) Then hypothesis testing isn't ideal, but might still be useful in a limited way: Although a significant p-value wouldn't tell us much, an insignificant p-value would tell us that we definitely should be worried about sampling variation as we interpret our findings.
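The a priori power calculation this answer has in mind can be sketched in a few lines. This is the standard normal-approximation formula for a one-sample z-test, shown as an illustrative aid (the function name and defaults are mine, not anything prescribed by the answer):

```python
import math
from statistics import NormalDist

# Standard normal-approximation sample size for a one-sample z-test:
# n = ((z_{1-alpha/2} + z_{power}) * sigma / delta)^2, rounded up.
def required_n(delta, sigma, alpha=0.05, power=0.80):
    z = NormalDist().inv_cdf
    return math.ceil(((z(1 - alpha / 2) + z(power)) * sigma / delta) ** 2)
```

For example, detecting a half-standard-deviation effect (`delta=0.5`, `sigma=1`) with 80% power at the usual 5% level gives the familiar n = 32.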
25,515 | Why is mean ± 2*SEM (95% confidence interval) overlapping, but the p-value is 0.05? | The overlap is just a (strict/inaccurate) rule of thumb
The point when the error bars do not overlap is when the distance between the two points is equal to $2(SE_1+SE_2)$. So effectively you are testing whether some sort of standardized score (distance divided by the sum of standard errors) is greater than 2. Let's call this $z_{overlap}$
$$ z_{overlap} = \frac{\vert \bar{X}_1- \bar{X}_2 \vert}{SE_1+SE_2} \geq 2$$
If this $z_{overlap} \geq 2$ then the error bars do not overlap.
The standard deviation of a linear sum of independent variables
Adding the standard deviations (errors) together is not the correct way to compute the standard deviation (error) of a linear sum (the difference $\bar{X}_1-\bar{X}_2$ can be considered as a linear sum in which one of the two terms is multiplied by a factor $-1$). See also: Sum of uncorrelated variables
So the following are true for independent $\bar{X}_1$ and $\bar{X}_2$:
$$\begin{array}{rcl}
\text{Var}(\bar{X}_1-\bar{X}_2) &=& \text{Var}(\bar{X}_1) + \text{Var}(\bar{X}_2)\\
\sigma_{\bar{X}_1-\bar{X}_2}^2 &=& \sigma_{\bar{X}_1}^2+\sigma_{\bar{X}_2}^2\\
\sigma_{\bar{X}_1-\bar{X}_2} &=& \sqrt{\sigma_{\bar{X}_1}^2+\sigma_{\bar{X}_2}^2}\\
\text{S.E.}(\bar{X}_1-\bar{X}_2) &=& \sqrt{\text{S.E.}(\bar{X}_1)^2 + \text{S.E.}(\bar{X}_2)^2}
\end{array}$$
But not
$$\text{S.E.}(\bar{X}_1-\bar{X}_2) \neq {\text{S.E.}(\bar{X}_1) + \text{S.E.}(\bar{X}_2)}$$
'Correct' formula for comparing the difference in the mean of two samples
For a t-test to compare the difference in means of two populations, you should be using a formula like
In the simplest case:
$$t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{SE_1^2+SE_2^2}}$$
This is the appropriate form when we consider the variances to be unequal, or when the sample sizes are equal.
If the sample sizes are different and you consider the variance of the populations to be equal, then you can estimate the variances for both samples together instead of separately, and use one of many formulae for the pooled variance like
$$s_p = \sqrt{\frac{(n_1-1)s_1^2 +(n_2-1)s_2^2}{n_1+n_2-2}}$$
with $$t = \frac{\bar{X}_1 - \bar{X}_2}{s_p \sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}$$
and with $SE_1 = s_1/\sqrt{n_1}$ and $SE_2 = s_2/\sqrt{n_2}$ you get
$$t = \frac{\bar{X}_1 - \bar{X}_2}{\sqrt{\frac{n_1+n_2}{n_1+n_2-2} \left( \frac{n_1-1}{n_2} SE_1^2 + \frac{n_2-1}{n_1} SE_2^2 \right)}}$$
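A quick numeric check of this last identity (my own sketch, with made-up summary statistics): computing $t$ directly from $s_p$ and from the rewritten SE-based form gives the same value.

```python
import math

# Pooled two-sample t from raw summary statistics...
def pooled_t(m1, s1, n1, m2, s2, n2):
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))

# ...and the same statistic written in terms of SE1 = s1/sqrt(n1), SE2 = s2/sqrt(n2).
def pooled_t_from_se(m1, se1, n1, m2, se2, n2):
    scale = (n1 + n2) / (n1 + n2 - 2) * ((n1 - 1) / n2 * se1**2 + (n2 - 1) / n1 * se2**2)
    return (m1 - m2) / math.sqrt(scale)
```

Evaluating both on the same (hypothetical) summaries, e.g. $\bar X_1=0, s_1=1.2, n_1=10$ versus $\bar X_2=1, s_2=0.8, n_2=15$, returns identical values, confirming the algebra.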
Note that the value $\sqrt{SE_1^2+SE_2^2}$ is smaller than $SE_1+SE_2$, therefore $t>z_{overlap}$.
Sidenotes:
In the case of the pooled variance, you might have a situation - although it is rare - that the variance of the larger sample is larger than the variance of the smaller sample, and then it is possible that $t<z_{overlap}$.
Instead of z-values and a z-test you are actually doing (should be doing) a t-test. So it might be that the levels on which you base the confidence intervals for the error bars (like '95% is equivalent to 2 times the standard error') will be different for the t-test. To be fair, to compare apples with apples, you should use the same standard and base the confidence levels for the error bars on a t-test as well. So let's assume that also for the t-test the boundary level that relates to 95% is equal to or less than 2 (this is the case for sample sizes larger than 60).
If this $t \geq 2$ then the difference is significant (at a 5% level).
The standard error of the difference between two variables is not the sum of standard errors of each variable. This sum is overestimating the error for the difference and will be too conservative (too often claim there is no significant difference).
So $t>z_{overlap}$, and you may find a significant difference while the error bars overlap. You do not need non-overlapping error bars in order to have a significant difference. Non-overlap is the stricter requirement: when the error bars do not overlap, the p-value is $\leq 0.05$ (and it will often be much lower).
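As a numeric illustration of this conclusion (the numbers are mine, chosen to sit in the gap): with $SE_1 = SE_2 = 1$ and a difference of $3.5$, the $2\cdot SE$ bars overlap while the correct statistic is clearly significant.

```python
import math

# Two means whose 2*SE error bars overlap, yet the proper statistic exceeds 2.
x1, x2, se1, se2 = 0.0, 3.5, 1.0, 1.0
z_overlap = abs(x1 - x2) / (se1 + se2)                 # 1.75 < 2: bars overlap
z_proper = abs(x1 - x2) / math.sqrt(se1**2 + se2**2)   # ~2.47 > 2: significant
```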
25,516 | Why is mean ± 2*SEM (95% confidence interval) overlapping, but the p-value is 0.05? | The p-value should be considered between a CI and a parameter value, not two CIs. Indeed, the red point falls entirely outside the blue CI, and the blue point falls entirely outside the red CI.
And it is true that under the null hypothesis such an event would happen 5% of the time:
2.5% of the time, you get a point above the 95% CI
2.5% of the time, you get a point below the 95% CI
If it is only the whiskers that overlap or touch, then the null hypothesis will produce this result a lot less often than 5%. This is because (to use your example) both the blue sample would need to be low, and at the same time the red sample would need to be high (exactly how high would depend on the blue value). You can picture it as a 3D multivariate Gaussian plot, with no skew since the two errors are independent of one another:
Along each axis the probability of falling outside the highlighted region (the CI) is 0.05. But the total probabilities of the blue and pink areas, which gives you P of the two CIs barely touching, is less than 0.05 in your case.
A change of variables from the blue/red axes to the green one will let you integrate this volume using a univariate rather than multivariate Gaussian, and the new variance is the pooled variance from @Sextus-Empiricus's answer.
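The "a lot less often than 5%" claim can be checked by simulation. This is a sketch under assumed conditions (two independent sample means, each with SE = 1, null hypothesis true); with $2\cdot SE$ bars, non-overlap requires $|\bar X_1-\bar X_2| > 2(SE_1+SE_2) = 4$:

```python
import random
from statistics import NormalDist

# Monte-Carlo: how often do the 2*SE bars fail to overlap under H0?
random.seed(1)
trials = 200_000
no_overlap = sum(abs(random.gauss(0, 1) - random.gauss(0, 1)) > 4
                 for _ in range(trials)) / trials

# Exact probability: the difference is Normal(0, sqrt(2)), so this is
# roughly 0.005 -- an order of magnitude below the nominal 5%.
exact = 2 * (1 - NormalDist().cdf(4 / 2**0.5))
```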
25,517 | Why is mean ± 2*SEM (95% confidence interval) overlapping, but the p-value is 0.05? | Even if we ignore the difference between confidence and probability, the overlap consists of points for which both the red probability and the blue probability are greater than 0.05. But that doesn't mean that the probability of both is greater than 0.05. For instance, if both the red and blue probability are 0.10, then the joint probability (assuming independence) is 0.01. If you integrate over the whole overlap, this will be less than 0.01.
When you look at the overlap, you are seeing points for which the difference is less than two standard deviations. But remember that the variance of the difference between two variables is the sum of the individual variances. So you can generally use a rule of thumb that if you want to compare two different populations by checking for overlapping CI, you need to divide the size of each CI by $\sqrt 2$: if the variances are of similar sizes, then the variance of the difference will be twice the individual variances, and the standard deviation will be $\sqrt 2$ times as large.
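A small check of the $\sqrt 2$ rule of thumb (assuming, as the answer does, similar standard errors): shrinking each $2\cdot SE$ margin by $\sqrt 2$ makes "bars just touching" coincide with the proper two-sample criterion $|\bar X_1 - \bar X_2| = 2\sqrt{SE_1^2+SE_2^2}$.

```python
import math

se = 1.3  # any common standard error (illustrative value)
touch_after_shrink = 2 * se / math.sqrt(2) + 2 * se / math.sqrt(2)
proper_cutoff = 2 * math.sqrt(se**2 + se**2)
# The two cutoffs agree exactly in the equal-SE case.
```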
25,518 | Hypothesis testing on tossing the coin n times | You want $n$ large enough that a confidence
interval of the form $\hat p \pm 1.96\sqrt{\frac{\hat p(1-\hat p)}{n}},$ where $X$ is the number of heads and $\hat p = X/n,$ does not include $0.5.$
Roughly speaking the standard error is $\sqrt{.6(.4)/n}$
and the margin of error is about $2\sqrt{.6(.4)/n}\approx 0.98/\sqrt{n}.$ And you want the margin of error to be less
than $0.1,$ so something around $n = 96$ should suffice. I show
examples with $n=100$ below.
n = 100; x = 60; z = qnorm(c(.025,.975))
CI = .6 + z*sqrt(.24/100); CI
[1] 0.5039818 0.6960182
A superior kind of CI due to Agresti and Coull uses
the point estimate $\tilde p = (x+2)/(n+4) = 62/104 = 0.5962$ and the endpoints are at $\tilde p \pm 1.96\sqrt{\tilde p(1-\tilde p)/104}.$ This interval also just misses covering $1/2.$
p.est = (60+2)/(100+4)
p.est + qnorm(c(.025, .975)) * sqrt( p.est*(1-p.est)/104 )
[1] 0.5018524 0.6904553
Finally, a Jeffreys 95% CI uses quantiles $0.025$ and $0.975$
of the distribution $\mathsf{BETA}(60+0.5, 40+0.5),$ so that the interval is $(0.5023,0.6920).$
qbeta(c(.025, .975), 60.5, 40.5)
[1] 0.5022567 0.6920477
Depending on the kind of interval you are using and whether you
want the smallest number just large enough so that the CI doesn't
contain $1/2,$ I'll leave the rest to you.
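A Python companion to the R snippets above (my sketch, using the same Wald-style interval): scan for the smallest $n$ at which $\hat p = 0.6$ gives a 95% interval excluding $1/2$. It lands slightly below the rough $n\approx 96$ because it uses $1.96$ rather than $2$.

```python
from math import sqrt

def smallest_n(p_hat=0.6, z=1.96):
    # Smallest n with p_hat - z*sqrt(p_hat*(1 - p_hat)/n) > 0.5.
    n = 1
    while p_hat - z * sqrt(p_hat * (1 - p_hat) / n) <= 0.5:
        n += 1
    return n
```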
25,519 | Hypothesis testing on tossing the coin n times | I worked out similar math for the problem of seed germination, to estimate the population germination rate from a sample of n seeds of which k germinate.
The formula I got for the CDF is:
where x is the germination probability. Substituting your problem variables in, letting n be the number of coin flips, k is 0.6*n, so the resulting formula would be:
which you can solve for n to give you the number of total flips that will exclude a fair coin with 95% credibility. Note that this is a CREDIBLE interval instead of a CONFIDENCE interval. I don't know how this will work with an exact 60% instead of integer values for n and k, but the general CDF formula can get you there for any arbitrary credible interval. Let x equal the "fair coin probability" of 0.5, k be the number of heads, and n the total number of flips.
I actually made a Javascript calculator for the seed germination problem that you could co-opt for this use. There is also a link to a PDF of the worked-out math if you want to take a look or check it. Hope this helps!
UPDATE: I brute forced the math with my calculator and I calculate that the 95% credible interval excludes a fair coin (50% probability) at between 95 and 100 coin flips. A plot of the credible interval evolution with n is attached.
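The computation described can be sketched as follows, under my assumption of a uniform prior (so the posterior for the heads probability after k heads in n flips is Beta(k+1, n−k+1), whose CDF at x has a closed binomial form for integer parameters); this is my own reconstruction, not the author's exact formula:

```python
from math import comb

# Assumed reconstruction (uniform prior): posterior CDF of the heads
# probability p after k heads in n flips, i.e. the Beta(k+1, n-k+1)
# CDF, written as a binomial sum.
def posterior_cdf(x, k, n):
    return sum(comb(n + 1, j) * x**j * (1 - x) ** (n + 1 - j)
               for j in range(k + 1, n + 2))

# A central 95% credible interval excludes the fair coin once
# P(p <= 0.5 | data) drops below 0.025; the author reports the
# crossover happening between 95 and 100 flips at a 60% heads rate.
def excludes_half(n, frac=0.6):
    return posterior_cdf(0.5, round(frac * n), n) < 0.025
```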
25,520 | Creating random points in the surface of a n-dimensional sphere | Using a stereographic projection is attractive.
The stereographic projection relative to a point $x_0\in S^{n}\subset \mathbb{R}^{n+1}$ maps any point $x$ not diametrically opposite to $x_0$ (that is, $x\ne -x_0$) onto the point $y(x;x_0)$ found by moving directly away from $-x_0$ until encountering the tangent plane of $S^n$ at $x_0.$ Write $t$ for the multiple of this direction vector $x-(-x_0) = x+x_0,$ so that
$$y = y(x;x_0)= x + t(x+x_0).$$
Points $y$ on the tangent plane are those for which $y,$ relative to $x_0,$ are perpendicular to the Normal direction at $x_0$ (which is $x_0$ itself). In terms of the Euclidean inner product $\langle\ \rangle$ this means
$$0 = \langle y - x_0, x_0 \rangle = \langle x + t(x+x_0) - x_0, x_0\rangle = t\langle x + x_0, x_0\rangle + \langle x-x_0, x_0\rangle.$$
This linear equation in $t$ has the unique solution
$$t = -\frac{\langle x-x_0,x_0\rangle}{\langle x + x_0, x_0\rangle}.$$
With a little analysis you can verify that $|y-x_0|$ agrees with $|x-x_0|$ to first order in $|x-x_0|,$ indicating that when $x$ is close to $x_0,$ Stereographic projection doesn't appreciably affect Euclidean distances: that is, up to first order, Stereographic projection is an approximate isometry near $x_0.$
Consequently, if we generate points $y$ on the tangent plane $T_{x_0}S^n$ near its origin at $x_0$ and view them as stereographic projections of corresponding points $x$ on $S^n,$ then the distribution of the points on the sphere will approximate the distribution of the points on the plane.
This leaves us with two subproblems to solve:
Generate Normally-distributed points near $x_0$ on $T_{x_0}S^n.$
Invert the Stereographic projection (based at $x_0$).
To solve (1), apply the Gram-Schmidt process to the vectors $x_0, e_1, e_2, \ldots, e_{n+1}$ where the $e_i$ are any basis for $\mathbb{R}^{n+1}.$ The result after $n+1$ steps will be an orthonormal sequence of vectors that includes a single zero vector. After removing that zero vector we will obtain an orthonormal basis $u_0 = x_0, u_1, u_2, \ldots, u_{n}.$
Generate a random point (according to any distribution whatsoever) on $T_{x_0}S^n$ by generating a random vector $Z = (z_1,z_2,\ldots, z_n) \in \mathbb{R}^n$ and setting
$$y = x_0 + z_1 u_1 + z_2 u_2 + \cdots + z_n u_n.\tag{1}$$
Because the $u_i$ are all orthogonal to $x_0$ (by construction), $y-x_0$ is obviously orthogonal to $x_0.$ That proves all such $y$ lie on $T_{x_0}S^n.$ When the $z_i$ are generated with a Normal distribution, $y$ follows a Normal distribution because it is a linear combination of Normal variates. Thus, this method satisfies all the requirements of the question.
To solve (2), find $x\in S^n$ on the line segment between $-x_0$ and $y.$ All such points can be expressed in terms of a unique real number $0 \lt s \le 1$ in the form
$$x = (1-s)(-x_0) + s y = s(x_0+y) - x_0.$$
Applying the equation of the sphere $|x|^2=1$ gives a quadratic equation for $s$
$$1 = |x_0+y|^2\,s^2 - 2\langle x_0,x_0+y\rangle\, s + 1$$
with unique nonzero solution
$$s = \frac{2\langle x_0, x_0+y\rangle}{|x_0+y|^2},$$
whence
$$x = s(x_0+y) - x_0 = \frac{2\langle x_0, x_0+y\rangle}{|x_0+y|^2}\,(x_0+y) - x_0.\tag{2}$$
Formulas $(1)$ and $(2)$ give an effective and efficient algorithm to generate the points $x$ on the sphere near $x_0$ with an approximate Normal distribution (or, indeed, to approximate any distribution of points close to $x_0$).
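As a quick numerical sanity check of formulas $(1)$ and $(2),$ here is a small stdlib Python sketch (the base point, the tangent frame, and the tangent-plane coordinates are arbitrary illustrative choices):

```python
import math

# Build a point x0 on S^2, an orthonormal tangent frame (u1, u2) at x0,
# a tangent-plane point y via formula (1), and its inverse stereographic
# projection x via formula (2); x must land back on the unit sphere.
x0 = [1 / math.sqrt(3)] * 3
u1 = [1 / math.sqrt(2), -1 / math.sqrt(2), 0.0]
u2 = [1 / math.sqrt(6), 1 / math.sqrt(6), -2 / math.sqrt(6)]
z = (0.3, -0.2)                                  # tangent-plane coordinates
y = [x0[i] + z[0] * u1[i] + z[1] * u2[i] for i in range(3)]   # formula (1)

w = [a + b for a, b in zip(x0, y)]               # x0 + y
s = 2 * sum(a * c for a, c in zip(x0, w)) / sum(c * c for c in w)
x = [s * c - a for c, a in zip(w, x0)]           # formula (2)
assert abs(sum(c * c for c in x) - 1.0) < 1e-9   # x lies on the sphere
```
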
Here is a scatterplot matrix of a set of 4,000 such points generated near $x_0 = (1,1,1)/\sqrt{3}.$ The standard deviation in the tangent plane is $1/\sqrt{12} \approx 0.29.$ This is large in the sense that the points are scattered across a sizable portion of the $x_0$ hemisphere, thereby making this a fairly severe test of the algorithm.
It was created with the following R implementation. At the end, this R code plots histograms of the squared distances of the $y$ points and the $z$ points to the basepoint $x_0.$ By construction, the former follows a $\chi^2(n)$ distribution. The sphere's curvature contracts the distances the most when they are large, but when $\sigma$ is not too large, the contraction is virtually unnoticeable.
#
# Extend any vector `x0` to an orthonormal basis.
# The first column of the output will be parallel to `x0`.
#
gram.schmidt <- function(x0) {
n <- length(x0)
V <- diag(rep(1, n)) # The usual basis of R^n
  if (max(abs(x0)) != 0) {
    i <- which.max(abs(x0)) # Replace the largest-magnitude element with x0
V <- cbind(x0, V[, -i])
}
L <- chol(crossprod(V[, 1:n]))
t(backsolve(L, t(V), transpose=TRUE))
}
#
# Inverse stereographic projection of `y` relative to the basepoint `x0`.
# The default for `x0` is (1, 0, ..., 0).
# Returns a point `x` on the sphere.
#
iStereographic <- function(y, x0) {
if (missing(x0) || max(abs(x0)) == 0)
x0 = c(1, rep(0, length(y)-1)) else x0 <- x0 / sqrt(sum(x0^2))
if (any(is.infinite(y))) {
-x0
} else {
x0.y <- x0 + y
s <- 2 * sum(x0 * x0.y) / sum(x0.y^2)
x <- s * x0.y - x0
x / sqrt(sum(x^2)) # (Guarantees output lies on the sphere)
}
}
#------------------------------------------------------------------------------#
library(mvtnorm) # Loads `rmvnorm`
n <- 4e3
x0 <- rep(1, 3)
U <- gram.schmidt(x0)
sigma <- 0.5 / sqrt(length(x0))
#
# Generate the points.
#
Y <- U[, -1] %*% t(sigma * rmvnorm(n, mean=rep(0, ncol(U)-1))) + U[, 1]
colnames(Y) <- paste("Y", 1:ncol(Y), sep=".")
X <- t(apply(Y, 2, iStereographic, x0=x0))
colnames(X) <- paste("X", 1:ncol(X), sep=".")
#
# Plot the points.
#
if(length(x0) <= 8 && n <= 5e3) pairs(X, asp=1, pch=19, cex=1/2, col="#00000040")
#
# Check the distances.
#
par(mfrow=c(1,2))
y2 <- colSums((Y-U[,1])^2)
hist(y2, freq=FALSE, breaks=30)
curve(dchisq(x / sigma^2, length(x0)-1) / sigma^2, add=TRUE, col="Tan", lwd=2, n=1001)
x0 <- x0 / sqrt(sum(x0^2))
z2 <- colSums((t(X) - x0)^2)
hist(z2, freq=FALSE, breaks=30)
curve(dchisq(x / sigma^2, length(x0)-1) / sigma^2, add=TRUE, col="SkyBlue", lwd=2, n=1001)
par(mfrow=c(1,1))
25,521 | Creating random points in the surface of a n-dimensional sphere | This answer uses a slightly different projection than Whuber's answer.
I want to create random points following a distribution with center X, the points must be in the surface of the n-dimensional sphere, and located very close to X.
This does not specify the problem in much detail. I will assume that the distribution of the points is spherically symmetric around the point X and that you have some desired distribution for the (Euclidian) distance between the points and X.
You can consider the n-sphere as a stack of (n-1)-spheres (slices/rings/frustums).
Now we project a point from the n-sphere onto the n-cylinder around it. Below is a view of the idea in 3 dimensions.
https://en.wikipedia.org/wiki/File:Cylindrical_Projection_basics.svg
The trick is then to sample the height on the cylinder and the direction away from the axis separately.
Without loss of generality we can use the coordinate $(1,0,0,0,...,0)$ (solve it for this case and then rotate the solution to your point $X$).
Then use the following algorithm:
Sample the coordinate $x_1$ by sampling which slices the points end up in according to some desired distance function.
Sample the coordinates $x_2, ..., x_n$ by determining where the points end up on the (n-1)-spheres (this is like sampling on a (n-1)-dimensional sphere with the regular technique).
Then rotate the solution to the point $X$. The rotations should bring the first coordinate $(1,0,0,0, ..., 0)$ to the vector $X$, the other coordinates should transform to vectors perpendicular to $X$, any orthonormal basis for the perpendicular space will do.
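The three steps can be sketched in stdlib Python (the half-normal angle law, its spread `sigma`, and the Gram-Schmidt construction of the rotation are illustrative choices, not prescribed by the answer):

```python
import math, random

def sample_near(X, sigma=0.1, rng=random):
    n = len(X)
    nX = math.sqrt(sum(c * c for c in X))
    X = [c / nX for c in X]                      # unit-norm center
    # Step 1: sample the first coordinate (which "slice") via a small
    # half-normal angle away from X -- an illustrative distance law.
    theta = abs(rng.gauss(0.0, sigma))
    x1, r = math.cos(theta), math.sin(theta)     # height and slice radius
    # Step 2: uniform direction on the (n-1)-sphere slice.
    d = [rng.gauss(0.0, 1.0) for _ in range(n - 1)]
    nd = math.sqrt(sum(c * c for c in d))
    d = [c / nd for c in d]
    # Step 3: rotate (1,0,...,0) to X: Gram-Schmidt an orthonormal
    # basis whose first vector is X, then recombine the coordinates.
    basis = [X]
    for i in range(n):
        if len(basis) == n:
            break
        e = [1.0 if j == i else 0.0 for j in range(n)]
        for b in basis:
            dot = sum(u * v for u, v in zip(e, b))
            e = [u - dot * v for u, v in zip(e, b)]
        ne = math.sqrt(sum(c * c for c in e))
        if ne > 1e-12:
            basis.append([c / ne for c in e])
    return [x1 * basis[0][j] + sum(r * d[k] * basis[k + 1][j] for k in range(n - 1))
            for j in range(n)]
```

The output is exactly on the sphere because $x_1^2 + r^2 = 1$ and the basis is orthonormal.
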
25,522 | Creating random points in the surface of a n-dimensional sphere | First, it is not possible to have the positions be exactly Gaussian since restriction to the surface of a sphere imposes a bound on the range of the coordinates.
You could look at using normal distributions truncated to $(-\pi, \pi)$ for each angular component. To be clear, for a 2-sphere (in 3-space) you have fixed the radius, and must choose 2 angles. I am suggesting you put truncated normal distributions on the angles.
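A minimal stdlib Python sketch for the 2-sphere case (rejection sampling implements the truncation; the pole, the angle parametrization, and the value of `sigma` are illustrative assumptions):

```python
import math, random

def truncated_normal(mu, sigma, lo, hi, rng=random):
    # Rejection sampling from N(mu, sigma^2) restricted to (lo, hi).
    while True:
        v = rng.gauss(mu, sigma)
        if lo < v < hi:
            return v

def point_near_pole(sigma=0.2, rng=random):
    # Fixed radius 1; two angles, with a truncated-normal polar angle,
    # so the points cluster near the pole (0, 0, 1).
    polar = truncated_normal(0.0, sigma, -math.pi, math.pi, rng)
    azimuth = rng.uniform(-math.pi, math.pi)
    return (math.sin(polar) * math.cos(azimuth),
            math.sin(polar) * math.sin(azimuth),
            math.cos(polar))
```
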
25,523 | Creating random points in the surface of a n-dimensional sphere | To specifically address your question, I have a simpler (sillier) alternative:
Why not lift your problem?
Your center $X$ is the projection (normalization) of a vector that need not have unit norm. You could define a vector $x$ to serve as your unnormalized center and then select data points around $x$ (typically using a Gaussian distribution).
There is a free parameter: the norm of $x$. As a matter of fact, what will matter is the ratio between the standard deviation of $X$ and that norm. You will get a value similar to the $\kappa$ value of the multi-dimensional von Mises distribution.
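A minimal stdlib Python sketch of the lifting idea (the value of $\kappa$ and the unit noise scale are illustrative assumptions; larger $\kappa$ means a tighter cluster):

```python
import math, random

def lifted_sample(X, kappa=10.0, rng=random):
    # Place the unnormalized center at kappa * X/|X|, draw a Gaussian
    # point around it, then project back onto the unit sphere.
    nX = math.sqrt(sum(c * c for c in X))
    x = [kappa * c / nX for c in X]
    y = [c + rng.gauss(0.0, 1.0) for c in x]
    ny = math.sqrt(sum(c * c for c in y))
    return [c / ny for c in y]
```
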
25,524 | Validation accuracy vs Testing accuracy | There isn't a standard terminology in this context (and I have seen long discussions and debates regarding this topic), so I completely understand you, but you should get used to different terminology (and assume that terminology might not be consistent or may change across sources).
I would like to point out a few things:
I have never seen people use the expression "validation accuracy" (or dataset) to refer to the test accuracy (or dataset), but I have seen people use the term "test accuracy" (or dataset) to refer to the validation accuracy (or dataset). In other words, the test (or testing) accuracy often refers to the validation accuracy, that is, the accuracy you calculate on the data set you do not use for training, but you use (during the training process) for validating (or "testing") the generalisation ability of your model or for "early stopping".
In k-fold cross-validation, people usually only mention two datasets: training and testing (or validation).
k-fold cross-validation is just a way of validating the model on different subsets of the data. This can be done for several reasons. For example, you have a small amount of data, so your validation (and training) dataset is quite small, so you want to have a better understanding of the model's generalisation ability by validating it on several subsets of the whole dataset.
You should likely have a separate (from the validation dataset) dataset for testing, because the validation dataset can be used for early stopping, so, in a certain way, it is dependent on the training process.
I would suggest using the following terminology:
Training dataset: the data used to fit the model.
Validation dataset: the data used to validate the generalisation ability of the model or for early stopping, during the training process.
Testing dataset: the data used for purposes other than training and validating.
Note that some of these datasets might overlap, but this is almost never a good thing (if you have enough data).
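As a sketch of how the three datasets stay disjoint in practice, here is a minimal stdlib Python partition (the 70/15/15 fractions are just a common convention, not a rule):

```python
import random

def three_way_split(data, f_train=0.7, f_val=0.15, rng=random):
    # Disjoint train/validation/test partition of `data`.
    idx = list(range(len(data)))
    rng.shuffle(idx)
    n_train = round(f_train * len(data))
    n_val = round(f_val * len(data))
    train = [data[i] for i in idx[:n_train]]
    val = [data[i] for i in idx[n_train:n_train + n_val]]
    test = [data[i] for i in idx[n_train + n_val:]]
    return train, val, test
```
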
25,525 | Validation accuracy vs Testing accuracy | @nbro's answer is complete. I just add a couple of explanations to supplement. In more traditional textbooks data is often partitioned into two sets: training and test. In recent years, with more complex models and an increasing need for model selection, development sets or validation sets are also considered. The devel/validation set should have no overlap with the test set, or the reported accuracy/error evaluation is not valid. In the modern setting: the model is trained on the training set, tested on the validation set to see if it is a good fit, and possibly the model is tweaked, trained again, and validated again, multiple times. When the final model is selected, the testing set is used to calculate accuracy and error reports. The important thing is that the test set is only touched once.
25,526 | Explosive AR(MA) processes are stationary? | Yes, there is a stationary solution for $\rho>1$ in AR(1) process:
$$X_t=\rho X_{t-1}+\varepsilon_t$$
I'm not sure you'll like it though:
$$X_t=-\sum_{k=1}^\infty\frac 1 {\rho^k}\varepsilon_{t+k}$$
Notice the index: $t+k$; you'd need a DeLorean to use this in practice.
When $\rho>1$ the process is not invertible.
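A quick stdlib Python check of this forward-looking solution (truncating the infinite sum at a finite horizon, which is an approximation: for $\rho>1$ the neglected tail decays geometrically):

```python
import random

# Verify numerically that X_t = -sum_{k>=1} eps_{t+k} / rho^k
# satisfies the AR(1) recursion X_t = rho * X_{t-1} + eps_t.
rng = random.Random(0)
rho = 2.0
eps = [rng.gauss(0.0, 1.0) for _ in range(200)]

def X(t, horizon=60):
    # Truncated forward sum; the dropped tail is of order rho**(-horizon).
    return -sum(eps[t + k] / rho ** k for k in range(1, horizon))

t = 50
assert abs(X(t) - (rho * X(t - 1) + eps[t])) < 1e-6
```
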
25,527 | Explosive AR(MA) processes are stationary? | First we can write the model in reverse AR(1) form as:
$$X_{t} = \frac{1}{\rho} X_{t+1} - \frac{\epsilon_{t+1}}{\rho}.$$
Suppose you now define the observable values using the filter:
$$X_t = - \sum_{k=1}^\infty \frac{\epsilon_{t+k}}{\rho^k}.$$
You can confirm by substitution that both the original AR(1) form and the reversed form hold in this case. As pointed out in an excellent answer to a related question by Michael, this means that the model is not identified unless we exclude this solution by definition.
25,528 | How can I test if the two parameter estimates in the same model are significantly different? | Assessing the hypothesis that $a$ and $b$ are different is equivalent to testing the null hypothesis $a - b = 0$ (against the alternative that $a-b\ne 0$).
The following analysis presumes it is reasonable for you to estimate $a-b$ as $$U = \hat a - \hat b.$$ It also accepts your model formulation (which often is a reasonable one), which--because the errors are additive (and could even produce negative observed values of $y$)--does not permit us to linearize it by taking logarithms of both sides.
The variance of $U$ can be expressed in terms of the covariance matrix $(c_{ij})$ of $(\hat a, \hat b)$ as
$$\operatorname{Var}(U) = \operatorname{Var}(\hat a - \hat b) = \operatorname{Var}(\hat a) + \operatorname{Var}(\hat b) - 2 \operatorname{Cov}(\hat a, \hat b) = c_{11} + c_{22} - 2c_{12}.$$
When $(\hat a, \hat b)$ is estimated with least squares, one usually uses a "t test;" that is, the distribution of $$t = U / \sqrt{\operatorname{Var(U)}}$$ is approximated by a Student t distribution with $n-2$ degrees of freedom (where $n$ is the data count and $2$ counts the number of coefficients). Regardless, $t$ usually is the basis of any test. You may perform a Z test (when $n$ is large or when fitting with Maximum Likelihood) or bootstrap it, for instance.
To be specific, the p-value of the t test is given by
$$p = 2t_{n-2}(-|t|)$$
where $t_{n-2}$ is the Student t (cumulative) distribution function. It is one expression for the "tail area:" the chance that a Student t variable (of $n-2$ degrees of freedom) equals or exceeds the size of the test statistic, $|t|.$
More generally, for numbers $c_1,$ $c_2,$ and $\mu$ you can use exactly the same approach to test any hypothesis
$$H_0: c_1 a + c_2 b = \mu$$
against the two-sided alternative. (This encompasses the special but widespread case of a "contrast".) Use the estimated variance-covariance matrix $(c_{ij})$ to estimate the variance of $U = c_1 a + c_2 b$ and form the statistic
$$t = (c_1 \hat a + c_2 \hat b - \mu) / \sqrt{\operatorname{Var}(U)}.$$
The foregoing is the case $(c_1,c_2) = (1,-1)$ and $\mu=0.$
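For a standalone numeric illustration, here is a stdlib Python sketch of the contrast test (the coefficient estimates and covariance matrix are made up; with small $n$ you would compare $t$ to a Student t quantile, whereas this sketch uses the large-sample normal approximation for the p-value):

```python
import math
from statistics import NormalDist

def contrast_test(coef, cov, c, mu=0.0):
    # t statistic for H0: c1*a + c2*b = mu, using Var(U) = c' Cov c.
    u = sum(ci * bi for ci, bi in zip(c, coef)) - mu
    m = len(c)
    var_u = sum(c[i] * c[j] * cov[i][j] for i in range(m) for j in range(m))
    t = u / math.sqrt(var_u)
    p = 2 * NormalDist().cdf(-abs(t))   # large-n normal approximation
    return t, p

# H0: a - b = 0 with illustrative (made-up) estimates and covariances.
t_stat, p_val = contrast_test([-0.45, -0.60],
                              [[0.004, 0.001], [0.001, 0.009]],
                              [1, -1])
```
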
To check that this advice is correct, I ran the following R code to create data according to this model (with Normally distributed errors e), fit them, and compute the values of $t$ many times. The check is that the probability plot of $t$ (based on the assumed Student t distribution) closely follows the diagonal. Here is that plot in a simulation of size $500$ where $n=5$ (a very small dataset, chosen because the $t$ distribution is far from Normal) and $a=b=-1/2.$
In this example, at least, the procedure works beautifully. Consider re-running the simulation using parameters $a,$ $b,$ $\sigma$ (the error standard deviation), and $n$ that reflect your situation.
Here is the code.
#
# Specify the true parameters.
#
set.seed(17)
a <- -1/2
b <- -1/2
sigma <- 0.25 # Standard deviation of the errors
n <- 5 # Sample size
n.sim <- 500 # Simulation size
#
# Specify the hypothesis.
#
H.0 <- c(1, -1) # Coefficients of `a` and `b`.
mu <- 0
#
# Provide x and z values in terms of their logarithms.
#
log.x <- log(rexp(n))
log.z <- log(rexp(n))
#
# Compute y without error.
#
y.0 <- exp(a * log.x + b * log.z)
#
# Conduct a simulation to estimate the sampling distribution of the t statistic.
#
sim <- replicate(n.sim, {
#
# Add the errors.
#
e <- rnorm(n, 0, sigma)
df <- data.frame(log.x=log.x, log.z=log.z, y.0, y=y.0 + e)
#
# Guess the solution.
#
fit.ols <- lm(log(y) ~ log.x + log.z - 1, subset(df, y > 0))
start <- coefficients(fit.ols) # Initial values of (a.hat, b.hat)
#
# Polish it using nonlinear least squares.
#
fit <- nls(y ~ exp(a * log.x + b * log.z), df, list(a=start[1], b=start[2]))
#
# Test a hypothesis.
#
cc <- vcov(fit)
s <- sqrt((H.0 %*% cc %*% H.0))
(crossprod(H.0, coef(fit)) - mu) / s
})
#
# Display the simulation results.
#
summary(lm(sort(sim) ~ 0 + ppoints(length(sim))))
qqplot(qt(ppoints(length(sim)), df=n-2), sim,
pch=21, bg="#00000010", col="#00000040",
xlab="Student t reference value",
ylab="Test statistic")
abline(0:1, col="Red", lwd=2) | How can I test if the two parameter estimates in the same model are significantly different? | Assessing the hypothesis that $a$ and $b$ are different is equivalent to testing the null hypothesis $a - b = 0$ (against the alternative that $a-b\ne 0$).
How can I test if the two parameter estimates in the same model are significantly different?
Assessing the hypothesis that $a$ and $b$ are different is equivalent to testing the null hypothesis $a - b = 0$ (against the alternative that $a-b\ne 0$).
The following analysis presumes it is reasonable for you to estimate $a-b$ as $$U = \hat a - \hat b.$$ It also accepts your model formulation (which often is a reasonable one), which--because the errors are additive (and could even produce negative observed values of $y$)--does not permit us to linearize it by taking logarithms of both sides.
The variance of $U$ can be expressed in terms of the covariance matrix $(c_{ij})$ of $(\hat a, \hat b)$ as
$$\operatorname{Var}(U) = \operatorname{Var}(\hat a - \hat b) = \operatorname{Var}(\hat a) + \operatorname{Var}(\hat b) - 2 \operatorname{Cov}(\hat a, \hat b) = c_{11} + c_{22} - 2c_{12}.$$
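As a quick stdlib-only sanity check of this identity (a sketch with made-up variances and covariance, not numbers from the question), one can simulate correlated estimates and compare the sample variance of their difference with $c_{11}+c_{22}-2c_{12}$:

```python
import math
import random

random.seed(0)
c11, c22, c12 = 1.0, 2.0, 0.6      # hypothetical Var(a-hat), Var(b-hat), Cov(a-hat, b-hat)
rho = c12 / math.sqrt(c11 * c22)   # implied correlation

diffs = []
for _ in range(200_000):
    z1, z2 = random.gauss(0, 1), random.gauss(0, 1)
    a_hat = math.sqrt(c11) * z1
    b_hat = math.sqrt(c22) * (rho * z1 + math.sqrt(1 - rho ** 2) * z2)
    diffs.append(a_hat - b_hat)

m = sum(diffs) / len(diffs)
var_u = sum((d - m) ** 2 for d in diffs) / (len(diffs) - 1)
# Theory: Var(a - b) = c11 + c22 - 2*c12 = 1.0 + 2.0 - 1.2 = 1.8
```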
When $(\hat a, \hat b)$ is estimated with least squares, one usually uses a "t test;" that is, the distribution of $$t = U / \sqrt{\operatorname{Var}(U)}$$ is approximated by a Student t distribution with $n-2$ degrees of freedom (where $n$ is the data count and $2$ counts the number of coefficients). Regardless, $t$ usually is the basis of any test. You may perform a Z test (when $n$ is large or when fitting with Maximum Likelihood) or bootstrap it, for instance.
To be specific, the p-value of the t test is given by
$$p = 2t_{n-2}(-|t|)$$
where $t_{n-2}$ is the Student t (cumulative) distribution function. It is one expression for the "tail area:" the chance that a Student t variable (of $n-2$ degrees of freedom) equals or exceeds the size of the test statistic, $|t|.$
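As a small Python sketch of this formula (my own illustration, not part of the original answer): for 3 degrees of freedom — the df used in the simulation below, where $n=5$ and two coefficients are estimated — the Student t CDF has a closed form, so the two-sided p-value takes only a few lines.

```python
import math

def t3_cdf(x):
    # Closed-form Student t CDF with 3 degrees of freedom.
    return 0.5 + (1.0 / math.pi) * (
        (x / math.sqrt(3)) / (1.0 + x * x / 3.0) + math.atan(x / math.sqrt(3))
    )

def two_sided_p(t):
    # p = 2 * t_{n-2}(-|t|), the tail-area formula above.
    return 2.0 * t3_cdf(-abs(t))
```

For example, `two_sided_p(3.1824)` comes out near 0.05, matching the familiar 5% two-sided critical value of $t_3$.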
More generally, for numbers $c_1,$ $c_2,$ and $\mu$ you can use exactly the same approach to test any hypothesis
$$H_0: c_1 a + c_2 b = \mu$$
against the two-sided alternative. (This encompasses the special but widespread case of a "contrast".) Use the estimated variance-covariance matrix $(c_{ij})$ to estimate the variance of $U = c_1 a + c_2 b$ and form the statistic
$$t = (c_1 \hat a + c_2 \hat b - \mu) / \sqrt{\operatorname{Var}(U)}.$$
The foregoing is the case $(c_1,c_2) = (1,-1)$ and $\mu=0.$
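The contrast statistic above can be sketched generically in Python (hypothetical estimates and covariance matrix — `vcov` plays the role of the estimated $(c_{ij})$; this is my own illustration, not the answer's code):

```python
import math

def contrast_t(beta_hat, vcov, c, mu=0.0):
    # t = (c1*a_hat + c2*b_hat - mu) / sqrt(c' Sigma c)
    u = sum(ci * bi for ci, bi in zip(c, beta_hat)) - mu
    k = len(c)
    var_u = sum(c[i] * vcov[i][j] * c[j] for i in range(k) for j in range(k))
    return u / math.sqrt(var_u)

# Hypothetical numbers: U = 0.3, Var(U) = 0.04 + 0.09 - 2*0.01 = 0.11
t_stat = contrast_t((-0.4, -0.7), [[0.04, 0.01], [0.01, 0.09]], c=(1, -1))
```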
To check that this advice is correct, I ran the following R code to create data according to this model (with Normally distributed errors e), fit them, and compute the values of $t$ many times. The check is that the probability plot of $t$ (based on the assumed Student t distribution) closely follows the diagonal. Here is that plot in a simulation of size $500$ where $n=5$ (a very small dataset, chosen because the $t$ distribution is far from Normal) and $a=b=-1/2.$
In this example, at least, the procedure works beautifully. Consider re-running the simulation using parameters $a,$ $b,$ $\sigma$ (the error standard deviation), and $n$ that reflect your situation.
Here is the code.
#
# Specify the true parameters.
#
set.seed(17)
a <- -1/2
b <- -1/2
sigma <- 0.25 # Standard deviation of the errors (rnorm takes an SD, not a variance)
n <- 5 # Sample size
n.sim <- 500 # Simulation size
#
# Specify the hypothesis.
#
H.0 <- c(1, -1) # Coefficients of `a` and `b`.
mu <- 0
#
# Provide x and z values in terms of their logarithms.
#
log.x <- log(rexp(n))
log.z <- log(rexp(n))
#
# Compute y without error.
#
y.0 <- exp(a * log.x + b * log.z)
#
# Conduct a simulation to estimate the sampling distribution of the t statistic.
#
sim <- replicate(n.sim, {
#
# Add the errors.
#
e <- rnorm(n, 0, sigma)
df <- data.frame(log.x=log.x, log.z=log.z, y.0, y=y.0 + e)
#
# Guess the solution.
#
fit.ols <- lm(log(y) ~ log.x + log.z - 1, subset(df, y > 0))
start <- coefficients(fit.ols) # Initial values of (a.hat, b.hat)
#
# Polish it using nonlinear least squares.
#
fit <- nls(y ~ exp(a * log.x + b * log.z), df, list(a=start[1], b=start[2]))
#
# Test a hypothesis.
#
cc <- vcov(fit)
s <- sqrt((H.0 %*% cc %*% H.0))
(crossprod(H.0, coef(fit)) - mu) / s
})
#
# Display the simulation results.
#
summary(lm(sort(sim) ~ 0 + ppoints(length(sim))))
qqplot(qt(ppoints(length(sim)), df=n-2), sim,
pch=21, bg="#00000010", col="#00000040",
xlab="Student t reference value",
ylab="Test statistic")
abline(0:1, col="Red", lwd=2)
25,529 | Decision trees, Gradient boosting and normality of predictors
Second question. Yes, algorithms based on decision trees are completely insensitive to the specific values of the predictors; they react only to their order. This means that you don't have to worry about "non-normality" of your predictors. Moreover, you can apply any monotonic transformation to your data if you want - it will not change the predictions of decision trees at all!
First question. I feel you should leave your data alone. By trimming and winsorizing it, you discard information that might be meaningful for your classification problem.
For linear models, long tails introduce noise that may be harmful. But for decision trees it is not a problem at all.
If you are too afraid of long tails, I would suggest applying a transformation that puts your data on a prettier scale without distorting the order of your observations. For example, you can make the scale roughly logarithmic by applying
$$
f(x) = \text{sign}(x) \alpha\log(|x / \alpha|+1)
$$
For small $x$ (roughly from $-\alpha$ to $\alpha$), this function is close to the identity, while large values are heavily shrunk towards 0; monotonicity is strictly preserved - thus, no information is lost.
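A small Python sketch of this transformation (my own illustration, with $\alpha=1$ as a default), checking that it is strictly monotonic - so tree splits are unaffected - and near-identity for small $x$:

```python
import math

def signed_log(x, alpha=1.0):
    # f(x) = sign(x) * alpha * log(|x / alpha| + 1)
    return math.copysign(alpha * math.log(abs(x / alpha) + 1.0), x)

xs = [-1000.0, -3.0, -0.2, 0.0, 0.1, 5.0, 50000.0]
ys = [signed_log(x) for x in xs]
# Order is preserved, while the long tail is heavily compressed:
# signed_log(50000) is only about 10.8.
```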
How can removing extremes affect the quality of prediction? By removing extreme values, you indeed prevent your model from making splits at very high or very low points. This restriction can only reduce the ability of your model to fit the training data. You have quite a large dataset (100K points is quite a lot, if it is not very high-dimensional), so I assume that your model doesn't suffer from severe overfitting if you regularize it properly (e.g. by controlling the maximum tree size and the number of trees). If that is the case, then restricting the model from splitting at high or low points will degrade prediction quality on the test set as well.
25,530 | Decision trees, Gradient boosting and normality of predictors
It will sound like circular reasoning, but: the method that scores best on the evaluation criterion is the best method.
Instead of worrying about the most theoretically sound approach, answer: given two approaches, how would you decide which one is best? When you’ve reduced the decision process to a quantitative algorithm, then every idea you have is potentially worth trying out, and the idea that achieves the best outcome is the best idea.
Then the focus is on ensuring the evaluation process leads to valid outcomes (apply cross-validation, calculate confidence intervals, possibly correct for false discovery rate).
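As a toy illustration of that evaluation loop (entirely made-up data and "models" - a mean versus a median predictor under absolute error, nothing from the thread), using only the standard library: score both approaches fold by fold and put a rough interval on the difference.

```python
import random
import statistics

random.seed(1)

def kfold(n, k):
    # Shuffle once, deal indices into k folds, yield (train, test) lists.
    idx = list(range(n))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    for i in range(k):
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield train, folds[i]

y = [random.gauss(0, 1) for _ in range(200)] + [25.0] * 5  # a few outliers

def mae(pred, ys):
    return sum(abs(pred - v) for v in ys) / len(ys)

diffs = []  # per-fold score difference: mean model minus median model
for train, test in kfold(len(y), 10):
    tr, te = [y[j] for j in train], [y[j] for j in test]
    diffs.append(mae(statistics.fmean(tr), te) - mae(statistics.median(tr), te))

d_bar = statistics.fmean(diffs)
se = statistics.stdev(diffs) / len(diffs) ** 0.5
ci = (d_bar - 2 * se, d_bar + 2 * se)  # rough 95% interval for the difference
```

If the interval excludes zero, the fold-wise evidence favors one predictor; otherwise the comparison is inconclusive at this sample size.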
25,531 | Bias / variance tradeoff math
First, nobody says that squared bias and variance behave just like $e^{\pm x}$, in case you are wondering. The point simply is that one increases and the other decreases. It's similar to supply and demand curves in microeconomics, which are traditionally depicted as straight lines, which sometimes confuses people. Again, the point simply is that one slopes downward and the other upward.
Your key confusion is about what is on the horizontal axis. It's model complexity - not sample size. Yes, as you write, if we use some unbiased estimator, then increasing the sample size will reduce its variance, and we will get a better model. However, the bias-variance tradeoff is in the context of a fixed sample size, and what we vary is the model complexity, e.g., by adding predictors.
If model A is too small and does not contain predictors whose true parameter value is nonzero, and model B encompasses model A but contains all predictors whose parameter values are nonzero, then parameter estimates from model A will be biased and from model B unbiased - but the variance of parameter estimates in model A will be smaller than for the same parameters in model B.
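A stdlib-only simulation of exactly this contrast (my own illustration with hypothetical coefficients and a 0.8 correlation between the predictors): model A omits $x_2$ and is biased but low-variance; model B includes both predictors and is unbiased but higher-variance.

```python
import random
import statistics

random.seed(2)

def ols2_b1(x1, x2, y):
    # Solve the 2x2 normal equations for y ~ b1*x1 + b2*x2 (no intercept);
    # return the estimate of b1.
    s11 = sum(a * a for a in x1)
    s12 = sum(a * b for a, b in zip(x1, x2))
    s22 = sum(b * b for b in x2)
    t1 = sum(a * v for a, v in zip(x1, y))
    t2 = sum(b * v for b, v in zip(x2, y))
    det = s11 * s22 - s12 * s12
    return (s22 * t1 - s12 * t2) / det

n, reps = 50, 2000
est_a, est_b = [], []
for _ in range(reps):
    x1 = [random.gauss(0, 1) for _ in range(n)]
    x2 = [0.8 * a + 0.6 * random.gauss(0, 1) for a in x1]      # corr(x1, x2) = 0.8
    y = [a + b + random.gauss(0, 1) for a, b in zip(x1, x2)]   # true b1 = b2 = 1
    est_a.append(sum(a * v for a, v in zip(x1, y)) / sum(a * a for a in x1))  # model A omits x2
    est_b.append(ols2_b1(x1, x2, y))                                          # model B uses both

# Model A's estimate of b1 centers near 1.8 (biased by the omitted x2);
# model B's centers near the true 1.0, but with larger variance.
```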
25,532 | Bias / variance tradeoff math
Problems occur when a model $f(x,\theta)$ has a high tendency to fit the noise.
In that case the model tends to over-fit. That is, it is not only expressing the true model but also the random noise that you do not want to capture with your model (because the noise is a non-systematic part that does not allow you to make predictions for new data).
One might improve (reduce) the total error of the fit by introducing some bias, when this bias reduces the variance/over-fitting more strongly than it increases the bias/under-fitting (i.e. not correctly representing the true model).
1. Why exactly $E[(\hat{\theta}_n - E[\hat{\theta}_n])^2]$ and $E[\hat{\theta}_n - \theta]$ cannot be decreased simultaneously?
This is not true. They can be decreased simultaneously (depending on the case). Imagine that you introduced some bias which increased both the variance and the bias. Then, going in the reverse direction, removing this bias will simultaneously reduce bias and variance.
For example, a scaled root mean squared difference $c \sqrt{\frac{1}{n} {\sum(x_i-\bar{x})^2}}$ for a sample of size $n$ is an (approximately) unbiased estimator for the population standard deviation $\sigma$ when $c=\sqrt{\frac{n}{n-1}}$. Now, if you had $c>\sqrt{\frac{n}{n-1}}$, then you would reduce both the bias and the variance by reducing the size of this constant $c$.
However, the bias that is (intentionally) added in regularization is often of the kind that reduces the variance (e.g. you could reduce $c$ to a level below $\sqrt{\frac{n}{n-1}}$). Thus, in practice, you get a trade-off in bias versus variance and reducing the bias will increase the variance (and vice versa).
2. Why can't we just take some unbiased estimator and reduce the variance by increasing sample size?
In principle you can.
But:
- this may require much more sampling effort, which is expensive, and this is often a limitation;
- there might also be computational difficulties with certain estimation problems, and the sample size would need to increase extremely in order to solve them, if that is possible at all (e.g. high dimensionality, parameters > measurements, or, as in ridge regression, very shallow paths around the global optimum).
Often there is also no objection to bias. When the goal is to reduce the total error (as it is in many cases), then the use of a biased but less erroneous estimator is to be preferred.
About your counter example.
Related to your second question you can indeed reduce the error by increasing the sample size. And related to your first question you can also reduce both bias and variance (say you use a scaled sample mean $c\frac{\sum{x_i}}{n}$ as estimator of the population mean and consider varying the scaling parameter $c$).
However, the region of practical interest is where the decreasing bias coincides with an increasing variance. The image below shows this contrast using a sample (size = 5) taken from a normal distribution with variance = 1 and mean = 1. The unscaled sample mean is the unbiased predictor of the population mean. If you increase the scaling of this predictor, then you have both increasing bias and increasing variance. However, if you decrease the scaling of the predictor, then you have increasing bias but decreasing variance. The "optimal" predictor is then actually not the sample mean but some shrunken estimator (see also Why is the James-Stein estimator called a "shrinkage" estimator?).
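The trade-off in that example can be written down exactly: for the scaled mean $c\bar{x}$, $\text{MSE}(c) = c^2\sigma^2/n + (c-1)^2\mu^2$. A tiny sketch with the same numbers (mean = 1, variance = 1, n = 5) finds the minimiser on a grid:

```python
mu, sigma2, n = 1.0, 1.0, 5

def mse(c):
    # Variance of c * x_bar plus the squared bias of c * x_bar.
    return c * c * sigma2 / n + (c - 1.0) ** 2 * mu * mu

best_c = min((i / 1000 for i in range(1501)), key=mse)
# The minimiser is mu^2 / (mu^2 + sigma^2/n) = 1/1.2, about 0.833 < 1:
# shrinking the sample mean beats the unbiased estimator in MSE.
```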
25,533 | Why does log-transformation of the RNA-seq data reduce the amount of explained variance in PCA?
Based on the size of your dataset, I suspect you are working with single-cell RNA-seq data.
If so, I can confirm your observation: with scRNA-seq data, PCA explained variances after log-transform are typically much lower than beforehand. Here is a replication of your finding with the Tasic et al. 2016 dataset I have at hand:
Here I used $\log(x+1)$ because of exact zeros. Note that log-transformed data yield explained variances roughly similar to the standardized data (when each variable is centered and scaled to have unit variance).
The reason for this is that different variables (genes) have VERY different variances. RNA-seq data are ultimately counts of RNA molecules, and the variance is monotonically growing with the mean (think Poisson distribution). So the genes that are highly expressed will have high variance whereas the genes that are barely expressed or detected at all, will have almost zero variance:
Without any transformations, there is one gene that alone explains above 40% of the variance (i.e. its variance is above 40% of the total variance). In this dataset, it happens to be this gene: https://en.wikipedia.org/wiki/Neuropeptide_Y which is very highly expressed (RPKM values over 100000) in some cells and has zero expression in some other cells. When you do PCA on the raw data, PC1 will basically coincide with this single gene.
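A crude stdlib sketch of this effect (synthetic counts, not the Tasic data): one "gene" with huge, bimodal expression next to twenty weakly expressed ones. Its share of the total variance is essentially 1 on the raw scale — so a covariance-based PC1 would just be that gene — and collapses after $\log(x+1)$:

```python
import math
import random
import statistics

random.seed(3)
n_cells = 300
# One highly expressed on/off gene (values around 50000 or exactly 0)...
high = [random.choice([0.0, 1.0]) * random.expovariate(1 / 50000)
        for _ in range(n_cells)]
# ...and twenty lowly expressed genes.
low = [[random.expovariate(1 / 5) for _ in range(n_cells)] for _ in range(20)]

def top_variance_share(genes):
    variances = [statistics.pvariance(g) for g in genes]
    return max(variances) / sum(variances)

raw_share = top_variance_share([high] + low)
log_share = top_variance_share([[math.log(x + 1) for x in g] for g in [high] + low])
```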
This is similar to what happens in the accepted answer to PCA on correlation or covariance?:
Notice that PCA on covariance is dominated by run800m and javelin: PC1 is almost equal to run800m (and explains 82% of the variance) and PC2 is almost equal to javelin (together they explain 97%). PCA on correlation is much more informative and reveals some structure in the data and relationships between variables (but note that the explained variances drop to 64% and 71%).
Update
In the comments, @An-old-man-in-the-sea brought up the issue of the variance-stabilizing transformations. The RNA-seq counts are usually modeled with a negative binomial distribution that has the following mean-variance relationship: $$V(\mu) = \mu + \frac{1}{r}\mu^2.$$
If we neglect the first term (which makes some sense under the assumption that highly expressed genes carry the most information for PCA), then the remaining mean-variance relationship becomes quadratic, which coincides with the log-normal distribution and has the logarithm as its variance-stabilizing transformation: $$\int\frac{1}{\sqrt{\mu^2}}d\mu=\log(\mu).$$
Alternatively, small powers $\mu^\alpha$ with e.g. $\alpha\approx 0.1$ or so can also make sense and would be variance-stabilizing for $V(\mu)=\mu^{2-2\alpha}$, so something in between the linear and the quadratic mean-variance relationships.
Another option is to use $$\int\frac{1}{\sqrt{\mu+\frac{1}{r}\mu^2}}\,d\mu=2\sqrt{r}\,\operatorname{arsinh}\sqrt{\frac{\mu}{r}}=2\sqrt{r}\,\log\Big(\sqrt{\frac{\mu}{r}}+\sqrt{\frac{\mu}{r}+1}\Big),$$ possibly with Anscombe's correction as $$\operatorname{arsinh}\sqrt{\frac{\mu+3/8}{r-3/4}}.$$ Clearly, for large $\mu$ all of these formulas behave like $\log(\mu)$ up to scale and shift.
See Harrison, 2015, Anscombe's 1948 variance stabilizing transformation for the negative binomial distribution is well suited to RNA-Seq expression data, and Anscombe, 1948, The Transformation of Poisson, Binomial and Negative-Binomial Data.
25,534 | Why does log-transformation of the RNA-seq data reduce the amount of explained variance in PCA?
PCA uses only linear algebra to find the "best" components to explain the variance. See this answer for a longer explanation of PCA. Therefore, if your dataset already has linear relationships between its variables, you'll get the best linear models without any non-linear transformations of the raw data.
You could ask your question again using exp or tanh or any other non-linear function instead of log, and you could expect a similar effect.
You are basically weakening the linear relations between variables when you apply a logarithm (or any other non-linear function) to a variable.
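A minimal check of that statement (my own toy numbers, not from the thread): a perfectly linear relationship has Pearson correlation 1, and applying a log to one side strictly lowers it:

```python
import math
import statistics

xs = [i / 10 for i in range(1, 101)]
ys = [2 * x + 1 for x in xs]  # exactly linear in xs

def pearson(a, b):
    ma, mb = statistics.fmean(a), statistics.fmean(b)
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    sa = sum((u - ma) ** 2 for u in a)
    sb = sum((v - mb) ** 2 for v in b)
    return cov / math.sqrt(sa * sb)

r_raw = pearson(xs, ys)                         # 1 (up to rounding)
r_log = pearson(xs, [math.log(v) for v in ys])  # noticeably below 1
```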
Usually it is good to have some theoretical basis to log-transform a variable.
25,535 | Distribution of linear regression coefficients
I know there are a lot of very knowledgeable people here, but I decided to have a shot at answering this anyway. Please correct me if I am wrong!
First, for clarification, you're looking for the distribution of the ordinary least-squares estimates of the regression coefficients, right? Under frequentist inference, the regression coefficients themselves are fixed and unobservable.
Secondly, $\pmb{\hat{\beta}} \sim N(\pmb{\beta}, (\mathbf{X}^T\mathbf{X})^{-1}\sigma^2)$ still holds in the second case, as you are still using a general linear model, which is a more general form than simple linear regression. The ordinary least-squares estimate is still the garden-variety $\pmb{\hat{\beta}} = (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^{T}\mathbf{Y}$ you know and love (or not) from linear algebra class. The response vector $\mathbf{Y}$ is multivariate normal, so $\pmb{\hat{\beta}}$ is normal as well; the mean and variance can be derived in a straightforward manner, independent of the normality assumption:
$E(\pmb{\hat{\beta}}) = E((\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}) = E[(\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T(\mathbf{X}\pmb{\beta}+\epsilon)] = \pmb{\beta}$
$Var(\pmb{\hat{\beta}}) = Var((\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}) = (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^TVar(Y)\mathbf{X}(\mathbf{X}^T \mathbf{X})^{-1} = (\mathbf{X}^T \mathbf{X})^{-1}\mathbf{X}^T\sigma^2\mathbf{I}\mathbf{X}(\mathbf{X}^T \mathbf{X})^{-1} = \sigma^2(\mathbf{X}^T \mathbf{X})^{-1}$
However, assuming you got the model right when you did the estimation, X looks a bit different from what we're used to:
$\mathbf{X} = \begin{bmatrix} 1 & \exp({X_1}) \\ 1 & \exp(X_2) \\ \vdots & \vdots \end{bmatrix}$
This was the distribution of $\hat{\beta_1}$ that I got using a similar simulation to yours:
I was able to reproduce what you got, however, using the wrong $\mathbf{X}$, i.e. the usual one:
$\mathbf{X} = \begin{bmatrix} 1 & {X_1} \\ 1 & X_2 \\ \vdots & \vdots \end{bmatrix}$
So it seems that when you were estimating the model in the second case, you may have been getting the model assumptions wrong, i.e. used the wrong design matrix.
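A sketch of the simulation described above, in numpy (the true coefficients, sample sizes, and noise scale are made up for illustration): with the design matrix built from $\exp(x_i)$, the OLS estimates of $\beta_1$ center on the true value, consistent with $E(\pmb{\hat{\beta}}) = \pmb{\beta}$.

```python
import numpy as np

rng = np.random.default_rng(1)
beta = np.array([0.5, 2.0])   # hypothetical true coefficients
n, n_sim = 200, 2000

b1_hat = np.empty(n_sim)
for s in range(n_sim):
    x = rng.normal(size=n)
    # The design matrix uses exp(x), matching the model being estimated.
    X = np.column_stack([np.ones(n), np.exp(x)])
    y = X @ beta + rng.normal(size=n)
    # OLS via least squares (equivalent to (X'X)^{-1} X'y).
    b1_hat[s] = np.linalg.lstsq(X, y, rcond=None)[0][1]

print(b1_hat.mean())   # close to 2.0: the estimator is unbiased
```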
25,536 | PCA too slow when both n,p are large: Alternatives? | Question 1: Let's say you have observed a data matrix $X \in \mathbb R^{n \times p}$. From this you can compute the eigendecomposition $X^T X = Q \Lambda Q^T$. The question now is: if we get new data coming from the same population, perhaps collected into a matrix $Z \in \mathbb R^{m \times p}$, will $ZQ$ be close to the ideal orthogonal rotation of $Z$? This kind of question is addressed by the Davis-Kahan theorem, and matrix perturbation theory in general (if you can get ahold of a copy, Stewart and Sun's 1990 textbook is the standard reference).
Question 2: you definitely can speed things up if you know you only need the top $k$ eigenvectors. In R I use rARPACK for this; I'm sure there's a Java equivalent since they're all fortran wrappers anyway.
Question 3: I don't know anything about Java implementations, but this thread discusses speeding up PCA as does this CV thread. There's a ton of research on this sort of thing and there are tons of methods out there using things like low rank approximations or randomization.
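For Question 2, a Python analogue of rARPACK is SciPy's ARPACK wrapper (a sketch, assuming SciPy is available; the matrix and sizes are made up): only the top $k$ eigenpairs are computed, which is far cheaper than a full eigendecomposition when $p$ is large.

```python
import numpy as np
from scipy.sparse.linalg import eigsh  # ARPACK wrapper, like rARPACK in R

rng = np.random.default_rng(2)
X = rng.normal(size=(300, 80))
C = X.T @ X          # p x p covariance-like matrix (unnormalized)

k = 5
# Lanczos/ARPACK computes only the k largest-magnitude eigenpairs.
vals, vecs = eigsh(C, k=k, which='LM')

# Sanity check against the full eigendecomposition.
full_vals = np.linalg.eigvalsh(C)
print(np.allclose(np.sort(vals), full_vals[-k:]))  # True
```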
25,537 | PCA too slow when both n,p are large: Alternatives? | The code you are using will invert the entire matrix. This is probably O(p^3) already. You can approximate the result in O(p^2) but that will still be slow (but probably 100x faster).
Essentially, take an arbitrary vector and do power iterations. With high probability, you'll get a good approximation of the first eigenvector. Then remove this factor from the matrix, repeat to get the second. Etc.
But have you tried whether the fast Barnes-Hut tSNE implementation in ELKI will maybe just work on your data with an index such as a cover tree? I've had that implementation work well when others failed.
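The power-iteration-with-deflation idea above can be sketched in a few lines of numpy (a toy-sized illustration with made-up data, not a production implementation):

```python
import numpy as np

def top_eigenpairs(A, k, iters=500):
    """Power iteration with deflation: approximate the k largest
    eigenpairs of a symmetric PSD matrix A, one at a time."""
    A = A.copy().astype(float)
    rng = np.random.default_rng(0)
    vals, vecs = [], []
    for _ in range(k):
        v = rng.normal(size=A.shape[0])   # arbitrary starting vector
        for _ in range(iters):
            v = A @ v
            v /= np.linalg.norm(v)
        lam = v @ A @ v                   # Rayleigh quotient
        vals.append(lam)
        vecs.append(v)
        A -= lam * np.outer(v, v)         # deflate: remove this factor
    return np.array(vals), np.column_stack(vecs)

# Compare with the full eigendecomposition on a small example.
rng = np.random.default_rng(1)
X = rng.normal(size=(100, 30))
C = X.T @ X
vals, _ = top_eigenpairs(C, 3)
print(np.allclose(vals, np.linalg.eigvalsh(C)[-1:-4:-1], rtol=1e-4))  # True
```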
25,538 | PCA too slow when both n,p are large: Alternatives? | If your goal is just to effect dimension reduction in a simple and direct manner, you could try an alternating least squares (ALS) technique. For instance, Apache Spark's MLlib has an ALS implementation and, I believe, offers a Java API. This should give you an $n \times K$ matrix and a $K \times p$ matrix. The $K \times p$ matrix will contain visualisable row vectors.
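Independent of Spark, the ALS idea itself is a short loop; here is a minimal numpy sketch (names and sizes are illustrative and do not reflect MLlib's API): fix one factor, solve a least-squares problem for the other, and alternate.

```python
import numpy as np

def als_factorize(X, K, iters=50):
    """Alternating least squares: factor X (n x p) into U (n x K) and
    V (K x p) by alternately solving two least-squares problems."""
    rng = np.random.default_rng(0)
    n, p = X.shape
    V = rng.normal(size=(K, p))
    for _ in range(iters):
        # Fix V, solve min_U ||X - U V||_F^2 (least squares in U).
        U = np.linalg.lstsq(V.T, X.T, rcond=None)[0].T
        # Fix U, solve min_V ||X - U V||_F^2 (least squares in V).
        V = np.linalg.lstsq(U, X, rcond=None)[0]
    return U, V

# A rank-3 matrix should be recovered essentially exactly.
rng = np.random.default_rng(1)
X = rng.normal(size=(60, 3)) @ rng.normal(size=(3, 40))
U, V = als_factorize(X, K=3)
print(np.allclose(U @ V, X))  # True
```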
25,539 | How do I normalize data in online learning? | In an ideal world, our training data should be representative of the production data, which means that the descriptive statistics (such as the mean, max, or min) should not change too much. Thus, in an "online-learning" environment, we should be able to use the max and min value from the historical training data to do the normalization.
If the training data is not representative of the production data, or we do not know how the production data is distributed, the answer is: 1. collect data; 2. do "training offline"; and then put into production.
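A minimal sketch of this approach (names and data are made up): fit the min/max on the historical training data once, then reuse those statistics on new production observations.

```python
import numpy as np

class MinMaxNormalizer:
    """Fit min/max on historical training data, then reuse those
    statistics to normalize new (production) observations."""
    def fit(self, X):
        self.min_ = X.min(axis=0)
        self.max_ = X.max(axis=0)
        return self
    def transform(self, X):
        return (X - self.min_) / (self.max_ - self.min_)

train = np.array([[0.0, 10.0], [5.0, 20.0], [10.0, 30.0]])
norm = MinMaxNormalizer().fit(train)
# New data is scaled with the *training* min/max, not its own.
print(norm.transform(np.array([[5.0, 25.0]])))  # [[0.5  0.75]]
```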
25,540 | How do I normalize data in online learning? | One possibility is to update statistics (mean, variance, min, max, etc.) using all historical data in an online manner and use them to normalize your data. Welford's online algorithm is such an example.
However, this kind of “online” normalization is not injective (if used in a strict online manner), in the sense that two distinct inputs that arrived at different times may be mapped / normalized to the same output value. Furthermore, this mapping / filtering / normalization is not guaranteed to be monotonic (especially in the beginning, when very few data have been observed).
So depending on the scarcity of the data, different strategies can be used. If the data is scarce or sample efficiency is a crucial criterion, for example in some real-life applications, we just use the strict online strategy. Otherwise, for example in cases where there is a simulator to generate data, we may resort to a "cold start": we begin by gathering data to obtain statistics (mean, variance, etc.) that are stable enough before using them to normalize training data. At training time, training data can be used or not to further adjust / update those statistics, depending on the scarcity of the data. At test time, test data will not be used to adjust / update statistics.
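Welford's online algorithm mentioned above fits in a few lines; here is a Python sketch with a check against the batch statistics (the data are synthetic):

```python
import numpy as np

class WelfordStats:
    """Welford's online algorithm: numerically stable running mean and
    variance, updated one observation at a time."""
    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)
    @property
    def variance(self):          # population variance
        return self.m2 / self.n

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=2.0, size=10_000)
stats = WelfordStats()
for x in data:
    stats.update(x)
print(np.isclose(stats.mean, data.mean()))      # True
print(np.isclose(stats.variance, data.var()))   # True
```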
25,541 | How do I normalize data in online learning? | I encountered this issue when I put a classifier into production. The two alternatives we considered were:
1. To use the historical data's metrics (min, max, SD), as has been proposed in other answers, to normalize newly arriving data.
2. To renormalize all data with the newly arriving data and recalculate the model.
Even if option number two is in theory more correct (all data is taken into account for normalization), it brings new issues (recalculation of the model, a big delay in classification time). Thus, if a sufficiently big sample has been used in the first normalization, I would NOT look for new normalization parameters each time data is added but use the old ones.
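A quick numerical illustration of why option 1 is usually safe (the distributions and sizes are made up): with a large historical sample, refitting the normalization parameters on all data barely changes them.

```python
import numpy as np

rng = np.random.default_rng(0)
historical = rng.normal(loc=5.0, scale=2.0, size=50_000)
new_batch = rng.normal(loc=5.0, scale=2.0, size=1_000)

# Option 1: keep the normalization parameters fitted on historical data.
mu_old, sd_old = historical.mean(), historical.std()
# Option 2: refit on everything (requires reprocessing all the data).
combined = np.concatenate([historical, new_batch])
mu_new, sd_new = combined.mean(), combined.std()

# With a sufficiently large historical sample, the two parameter sets
# are nearly identical, so reusing the old ones is reasonable.
print(abs(mu_old - mu_new) < 0.05, abs(sd_old - sd_new) < 0.05)  # True True
```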
25,542 | Logistic regression for data from Poisson distributions | $Y$ has two possible values for any given value of $X$. According to the assumptions,
$$\Pr(X=x|Y=0) = \exp(-\lambda_0) \frac{\lambda_0^x}{x!}$$
and
$$\Pr(X=x|Y=1) = \exp(-\lambda_1) \frac{\lambda_1^x}{x!}.$$
Therefore (this is a trivial case of Bayes' Theorem, taking the two classes to be a priori equally likely) the chance that $Y=1$ conditional on $X=x$ is the relative probability of the latter, namely
$$\Pr(Y=1|X=x) = \frac{\exp(-\lambda_1) \frac{\lambda_1^x}{x!}}{\exp(-\lambda_1) \frac{\lambda_1^x}{x!} + \exp(-\lambda_0) \frac{\lambda_0^x}{x!}}= \frac{1}{1 + \exp(\beta_0 + \beta_1 x)}$$
where
$$\beta_0 = \lambda_1 - \lambda_0$$
and
$$\beta_1 = -\log(\lambda_1/\lambda_0).$$
That indeed is the standard logistic regression model.
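The identity can be checked numerically with nothing but the standard library (the rates $\lambda_0, \lambda_1$ are made up, and equal class priors are assumed, as in the derivation):

```python
import math

lam0, lam1 = 2.0, 5.0        # hypothetical Poisson rates for the two classes
beta0 = lam1 - lam0
beta1 = -math.log(lam1 / lam0)

def poisson_pmf(lam, x):
    return math.exp(-lam) * lam**x / math.factorial(x)

for x in range(10):
    # Bayes posterior with equal priors ...
    bayes = poisson_pmf(lam1, x) / (poisson_pmf(lam1, x) + poisson_pmf(lam0, x))
    # ... equals the logistic form with the beta's derived above.
    logistic = 1.0 / (1.0 + math.exp(beta0 + beta1 * x))
    assert math.isclose(bayes, logistic)
print("identity verified")
```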
25,543 | Why is a random walk not a stationary process? [duplicate] | For stationarity, the entire distribution of $p_t$ has to be constant over time, not only its mean. And while the mean of $p_t$ is indeed constant, its standard deviation, for example, isn't. The larger $t$, the higher the standard deviation of $p_t$ (over all realisations of the random walk – which is what you have to consider for stationarity), since individual realisations of the random walk can stray further and further from $p_0$.
From another point of view, non-stationarity is tied to special points in time, and here $t=0$ is special, since $p_0$ is fixed to $1$.
To turn this into a stationary process, you would have to equally allow for all initial conditions – which is impossible as there is no uniform distribution on the real numbers.
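A simulation makes the point concrete (path counts and lengths are arbitrary): across realisations, the mean of $p_t$ stays near $p_0$, but the standard deviation grows like $\sqrt{t}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, T = 20_000, 1_000

# Many realisations of a random walk p_t = p_{t-1} + eps_t, with p_0 = 1.
steps = rng.normal(size=(n_paths, T))
paths = 1.0 + np.cumsum(steps, axis=1)

# Across realisations, the mean stays near p_0 = 1 ...
print(abs(paths[:, -1].mean() - 1.0) < 1.0)  # True
# ... but the standard deviation grows like sqrt(t).
sd = paths.std(axis=0)
print(abs(sd[99] - np.sqrt(100)) < 0.5,
      abs(sd[-1] - np.sqrt(1000)) < 2.0)     # True True
```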
25,544 | Why is a random walk not a stationary process? [duplicate] | It's not stationary because if you assume $p_t = bp_{t-1} + a_t$, then the variance of this process is $\sigma^2_{p_t} = \sigma^2_{a_t} / (1-b^2)$. Hence when $b = 1$, the variance explodes (i.e., the time series could be anywhere). This violates the condition required to be stationary (constant variance).
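The formula follows from writing the stationary AR(1) process as $p_t = \sum_k b^k a_{t-k}$, so that $\mathrm{Var}(p_t) = \sigma^2_{a_t}\sum_k b^{2k} = \sigma^2_{a_t}/(1-b^2)$, which diverges as $b \to 1$. A quick deterministic check of the geometric sum (for an arbitrary $b = 0.9$):

```python
# Truncated geometric series vs. the closed form sigma_a^2 / (1 - b^2).
b, sigma_a2 = 0.9, 1.0
series = sigma_a2 * sum(b ** (2 * k) for k in range(10_000))
closed_form = sigma_a2 / (1 - b ** 2)
print(abs(series - closed_form) < 1e-9)  # True
```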
25,545 | The Least Squares Assumptions | You do not need assumptions on the 4th moments for consistency of the OLS estimator, but you do need assumptions on higher moments of $x$ and $\epsilon$ for asymptotic normality and to consistently estimate what the asymptotic covariance matrix is.
In some sense though, that is a mathematical, technical point, not a practical point. For OLS to work well in finite samples in some sense requires more than the minimal assumptions necessary to achieve asymptotic consistency or normality as $n \rightarrow \infty$.
Sufficient conditions for consistency:
If you have regression equation:
$$ y_i = \mathbf{x}_i' \boldsymbol{\beta} + \epsilon_i $$
The OLS estimator $\hat{\mathbf{b}}$ can be written as:
$$ \hat{\mathbf{b}} = \boldsymbol{\beta} + \left( \frac{X'X}{n}\right)^{-1}\left(\frac{X'\boldsymbol{\epsilon}}{n} \right)$$
For consistency, you need to be able to apply Kolmogorov's Law of Large Numbers or, in the case of time-series with serial dependence, something like the Ergodic Theorem of Karlin and Taylor so that:
$$ \frac{1}{n} X'X \xrightarrow{p} \mathrm{E}[\mathbf{x}_i\mathbf{x}_i'] \quad \quad \quad \frac{1}{n} X'\boldsymbol{\epsilon} \xrightarrow{p} \mathrm{E}\left[\mathbf{x}_i' \epsilon_i\right] $$
Other assumptions needed are:
$\mathrm{E}[\mathbf{x}_i\mathbf{x}_i']$ is full rank and hence the matrix is invertible.
Regressors are predetermined or strictly exogenous so that $\mathrm{E}\left[\mathbf{x}_i \epsilon_i\right] = \mathbf{0}$.
Then $\left( \frac{X'X}{n}\right)^{-1}\left(\frac{X'\boldsymbol{\epsilon}}{n} \right) \xrightarrow{p} \mathbf{0}$ and you get $\hat{\mathbf{b}} \xrightarrow{p} \boldsymbol{\beta}$
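A quick numerical illustration of this consistency result (the coefficients and sample sizes are made up): the average estimation error shrinks as $n$ grows.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.array([1.0, -2.0])   # hypothetical true coefficients

def ols_error(n):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = X @ beta + rng.normal(size=n)
    b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    return np.linalg.norm(b_hat - beta)

# Consistency in action: much smaller average error at the larger n.
avg_small = np.mean([ols_error(100) for _ in range(50)])
avg_large = np.mean([ols_error(100_000) for _ in range(50)])
print(avg_small > 5 * avg_large)  # True
```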
If you want the central limit theorem to apply then you need assumptions on higher moments, for example, $\mathrm{E}[\mathbf{g}_i\mathbf{g}_i']$ where $\mathbf{g_i} = \mathbf{x}_i \epsilon_i$. The central limit theorem is what gives you asymptotic normality of $\hat{\mathbf{b}}$ and allows you to talk about standard errors. For the second moment $\mathrm{E}[\mathbf{g}_i\mathbf{g}_i']$ to exist, you need the 4th moments of $x$ and $\epsilon$ to exist. You want to argue that $\sqrt{n}\left(\frac{1}{n} \sum_i \mathbf{x}_i' \epsilon_i \right) \xrightarrow{d} \mathcal{N}\left( 0, \Sigma \right)$ where $\Sigma = \mathrm{E}\left[\mathbf{x}_i\mathbf{x}_i'\epsilon_i^2 \right]$. For this to work, $\Sigma$ has to be finite.
A nice discussion (which motivated this post) is given in Hayashi's Econometrics. (See also p. 149 for 4th moments and estimating the covariance matrix.)
Discussion:
These requirements on 4th moments are probably a technical point rather than a practical one. You're probably not going to encounter pathological distributions where this is a problem in everyday data. It's far more common for other assumptions of OLS to go awry.
A different question, undoubtedly answered elsewhere on Stack Exchange, is how large a sample you need for finite samples to get close to the asymptotic results. There's some sense in which fantastic outliers lead to slow convergence. For example, try estimating the mean of a lognormal distribution with really high variance. The sample mean is a consistent, unbiased estimator of the population mean, but in that log-normal case with crazy excess kurtosis, finite-sample results are really quite off.
Finite vs. infinite is a hugely important distinction in mathematics. That's not the problem you encounter in everyday statistics. Practical problems are more in the small vs. big category. Is the variance, kurtosis etc... small enough so that I can achieve reasonable estimates given my sample size?
Pathological example where OLS estimator is consistent but not asymptotically normal
Consider:
$$ y_i = b x_i + \epsilon_i$$
Where $x_i \sim \mathcal{N}(0,1)$ but $\epsilon_i$ is drawn from a t-distribution with 2 degrees of freedom thus $\mathrm{Var}(\epsilon_i) = \infty$. The OLS estimate converges in probability to $b$ but the sample distribution for the OLS estimate $\hat{b}$ is not normally distributed. Below is the empirical distribution for $\hat{b}$ based upon 10000 simulations of a regression with 10000 observations.
The distribution of $\hat{b}$ isn't normal; the tails are too heavy. But if you increase the degrees of freedom to 3, so that the second moment of $\epsilon_i$ exists, then the central limit theorem applies and you get:
Code to generate it:
beta = [-4; 3.7];            % true coefficients (intercept and slope)
n = 1e5;                     % observations per regression
n_sim = 10000;               % number of simulated regressions
b = zeros(2, n_sim);         % preallocate the estimates
for s = 1:n_sim
    X = [ones(n, 1), randn(n, 1)];
    u = trnd(2, n, 1) / 100; % t(2) errors: infinite variance
    y = X * beta + u;
    b(:, s) = X \ y;         % OLS estimate via backslash
end
b = b';
qqplot(b(:, 2));             % QQ-plot of the slope estimates
25,546 | The Least Squares Assumptions | This is a sufficient assumption, but not a minimal one [1]. OLS is not biased under these conditions, it is just inconsistent. The asymptotic properties of OLS break down when $X$ can have extremely large influence and/or if you can obtain extremely large residuals. You may not have encountered a formal presentation of the Lindeberg Feller central limit theorem, but that is what they are addressing here with the fourth moment conditions, and the Lindeberg condition tells us basically the same thing: no overlarge influence points, no overlarge high leverage points [2].
These theoretical underpinnings of statistics cause a lot of confusion when boiled down for practical applications. There is no definition of an outlier; it is an intuitive concept. To understand it roughly, the observation would have to be a high-leverage point or high-influence point, e.g. one for which the deletion diagnostic (DF beta) is very large, or for which the Mahalanobis distance in the predictors is large (in univariate stats that's just a Z score). But let's return to practical matters: if I conduct a random survey of people and their household income, and out of 100 people, 1 of the persons I sample is a millionaire, my best guess is that millionaires are representative of 1% of the population. In a biostatistics lecture, these principles are discussed, and it is emphasized that any diagnostic tool is essentially exploratory [3]. A nice point is made here: when exploratory statistics uncover an outlier, the "result" of such an analysis is not "the analysis which excludes the outlier is the one I believe", it is, "removing one point completely changed my analysis."
Kurtosis is a scaled quantity which depends upon the second moment of a distribution, but the assumption of finite, non-zero variance for these values is tacit, since it is impossible for this property to hold in the fourth moment but not in the second. So basically yes, but overall I have never inspected either kurtosis or fourth moments. I don't find them to be a practical or intuitive measure. In this day, when a histogram or scatter plot is produced at the snap of one's fingers, it behooves us to use qualitative graphical diagnostics by inspecting these plots.
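To make the diagnostics above concrete, here is a small numpy-only sketch (my illustration, not part of the original answer; all names and numbers are made up). It plants one high-leverage, high-influence point in a toy regression and checks that the hat values (leverage) and a leave-one-out DF beta for the slope both flag it.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100
X = np.column_stack([np.ones(n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0]) + rng.normal(size=n)
X[0, 1] = 10.0   # plant a high-leverage x value...
y[0] = -20.0     # ...that is also highly influential

beta = np.linalg.lstsq(X, y, rcond=None)[0]

# Leverage: diagonal of the hat matrix H = X (X'X)^{-1} X'
H = X @ np.linalg.solve(X.T @ X, X.T)
leverage = np.diag(H)

# Leave-one-out DF beta for the slope: how far does the estimate
# move when observation i is deleted?
dfbeta = np.empty(n)
for i in range(n):
    keep = np.arange(n) != i
    beta_i = np.linalg.lstsq(X[keep], y[keep], rcond=None)[0]
    dfbeta[i] = beta[1] - beta_i[1]

print(leverage.argmax(), np.abs(dfbeta).argmax())  # -> 0 0
```

Both diagnostics single out the planted observation; in practice one inspects them graphically rather than against a fixed cutoff, in line with the exploratory spirit described above.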
[1] https://math.stackexchange.com/questions/79773/how-does-one-prove-that-lindeberg-condition-is-satisfied
[2] http://projecteuclid.org/download/pdf_1/euclid.ss/1177013818
[3] http://faculty.washington.edu/semerson/b517_2012/b517L03-2012-10-03/b517L03-2012-10-03.html
Why do the probability distributions multiply here?

These operations are being performed on likelihoods rather than probabilities. Although the distinction may be subtle, you identified a crucial aspect of it: the product of two densities is (almost) never a density. (See the comment thread for a discussion of why "almost" is required.)
The language in the blog hints at this--but at the same time gets it subtly wrong--so let's analyze it:
The mean of this distribution is the configuration for which both estimates are most likely, and is therefore the best guess of the true configuration given all the information we have.
We have already observed the product is not a distribution. (Although it could be turned into one via multiplication by a suitable number, that's not what's going on here.)
The words "estimates" and "best guess" indicate that this machinery is being used to estimate a parameter--in this case, the "true configuration" (x,y coordinates).
Unfortunately, the mean is not the best guess. The mode is. This is the Maximum Likelihood (ML) Principle.
In order for the blog's explanation to make sense, we have to suppose the following. First, there is a true, definite location. Let's abstractly call it $\mu$. Second, each "sensor" is not reporting $\mu$. Instead it reports a value $X_i$ that is likely to be close to $\mu$. The sensor's "Gaussian" gives the probability density for the distribution of $X_i$. To be very clear, the density for sensor $i$ is a function $f_i$, depending on $\mu$, with the property that for any region $\mathcal{R}$ (in the plane), the chance that the sensor will report a value in $\mathcal{R}$ is
$$\Pr(X_i \in \mathcal{R}) = \int_{\mathcal{R}} f_i(x;\mu) dx.$$
Third, the two sensors are assumed to be operating with physical independence, which is taken to imply statistical independence.
By definition, the likelihood of the two observations $x_1, x_2$ is the probability densities they would have under this joint distribution, given the true location is $\mu$. The independence assumption implies that's the product of the densities. To clarify a subtle point,
The product function that assigns $f_1(x;\mu)f_2(x;\mu)$ to an observation $x$ is not a probability density for $x$; however,
The product $f_1(x_1;\mu)f_2(x_2;\mu)$ is the joint density for the ordered pair $(x_1, x_2)$.
In the posted figure, $x_1$ is the center of one blob, $x_2$ is the center of another, and the points within its space represent possible values of $\mu$. Notice that neither $f_1$ nor $f_2$ is intended to say anything at all about probabilities of $\mu$! $\mu$ is just an unknown fixed value. It's not a random variable.
Here is another subtle twist: the likelihood is considered a function of $\mu$. We have the data--we're just trying to figure out what $\mu$ is likely to be. Thus, what we need to be plotting is the likelihood function
$$\Lambda(\mu) = f_1(x_1;\mu)f_2(x_2;\mu).$$
It is a singular coincidence that this, too, happens to be a Gaussian! The demonstration is revealing. Let's do the math in just one dimension (rather than two or more) to see the pattern--everything generalizes to more dimensions. The logarithm of a Gaussian has the form
$$\log f_i(x_i;\mu) = A_i - B_i(x_i-\mu)^2$$
for constants $A_i$ and $B_i$. Thus the log likelihood is
$$\eqalign{
\log \Lambda(\mu) &= A_1 - B_1(x_1-\mu)^2 + A_2 - B_2(x_2-\mu)^2 \\
&= C - (B_1+B_2)\left(\mu - \frac{B_1x_1+B_2x_2}{B_1+B_2}\right)^2
}$$
where $C$ does not depend on $\mu$. This is the log of a Gaussian where the role of the $x_i$ has been replaced by that weighted mean shown in the fraction.
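A quick numerical check of this identity (my own illustrative numbers, not from the post): maximizing the combined log-likelihood over a grid of $\mu$ values lands on the precision-weighted mean.

```python
import numpy as np

# Two Gaussian "sensor" log-likelihoods, log f_i = A_i - B_i (x_i - mu)^2,
# with B_i = 1 / (2 sigma_i^2). Illustrative observations and spreads:
x1, s1 = 2.0, 1.0
x2, s2 = 5.0, 2.0
B1, B2 = 1 / (2 * s1**2), 1 / (2 * s2**2)

mu = np.linspace(-5.0, 10.0, 200001)
log_lik = -B1 * (x1 - mu) ** 2 - B2 * (x2 - mu) ** 2  # constants dropped

mu_hat = mu[np.argmax(log_lik)]
weighted_mean = (B1 * x1 + B2 * x2) / (B1 + B2)
print(mu_hat, weighted_mean)  # both approximately 2.6
```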
Let's return to the main thread. The ML estimate of $\mu$ is that value which maximizes the likelihood. Equivalently, it maximizes this Gaussian we just derived from the product of the Gaussians. By definition, the maximum is a mode. It is coincidence--resulting from the point symmetry of each Gaussian around its center--that the mode happens to coincide with the mean.
This analysis has revealed that several coincidences in the particular situation have obscured the underlying concepts:
a multivariate (joint) distribution was easily confused with a univariate distribution (which it is not);
the likelihood looked like a probability distribution (which it is not);
the product of Gaussians happens to be Gaussian (a regularity which is not generally true when sensors vary in non-Gaussian ways);
and their mode happens to coincide with their mean (which is guaranteed only for sensors with symmetric responses around the true values).
Only by focusing on these concepts and stripping away the coincidental behaviors can we see what's really going on.
Why do the probability distributions multiply here?

I already see an excellent answer but I'm just posting mine since I already started writing it.
Physician 1 has this prediction model: $d_1\sim N(\mu_1, \sigma_1)$
Physician 2 has this prediction model: $d_2\sim N(\mu_2, \sigma_2)$
So in order for us to evaluate the joint probability $P(d_1,d_2)=P(d_1|d_2)P(d_2)$ we only need to realize that this factorizes into $P(d_1)P(d_2)$ since $P(d_1|d_2)=P(d_1)$ due to the independence of the two physicians.
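A quick numerical sanity check of this factorization (all parameter values below are illustrative, not from the answer): with zero correlation, the bivariate normal density evaluated directly equals the product of the two marginal densities.

```python
import math

def npdf(x, mu, sigma):
    """Univariate normal density."""
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2 * math.pi))

def binpdf(x, y, mu1, s1, mu2, s2, rho):
    """Bivariate normal density with correlation rho."""
    zx, zy = (x - mu1) / s1, (y - mu2) / s2
    q = (zx * zx - 2 * rho * zx * zy + zy * zy) / (1 - rho * rho)
    return math.exp(-0.5 * q) / (2 * math.pi * s1 * s2 * math.sqrt(1 - rho * rho))

# Two independent "physician" models and one pair of observed values
mu1, s1, d1 = 170.0, 5.0, 172.0
mu2, s2, d2 = 168.0, 4.0, 169.0

joint = binpdf(d1, d2, mu1, s1, mu2, s2, rho=0.0)
product = npdf(d1, mu1, s1) * npdf(d2, mu2, s2)
# With rho = 0 the joint density factorizes: joint == product
```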
Does "Log loss" refer to Logarithmic loss or Logistic loss?

Logarithmic loss = Logistic loss = log loss = $-y_i\log(p_i) - (1 -y_i) \log(1 -p_i)$
Sometimes people take a different logarithmic base, but it typically doesn't matter. I hear logistic loss more often.
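To see why the base doesn't matter, here is a small pure-Python sketch (my illustration; the toy labels and probabilities are made up): switching from natural log to base 2 just rescales the loss by the constant factor $\ln 2$.

```python
import math

def log_loss(y, p, base=math.e):
    """Mean log loss: -[y log(p) + (1 - y) log(1 - p)], averaged over samples."""
    terms = [-(yi * math.log(pi, base) + (1 - yi) * math.log(1 - pi, base))
             for yi, pi in zip(y, p)]
    return sum(terms) / len(terms)

y = [1, 0, 1, 1, 0]
p = [0.9, 0.2, 0.6, 0.8, 0.4]

nats = log_loss(y, p)           # natural log, the usual convention
bits = log_loss(y, p, base=2)   # base 2: the same loss measured in bits
# bits == nats / ln(2): only the scale changes, never the ranking of models
```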
FYI:
How is logistic loss and cross-entropy related?
Thesaurus for statistics and machine learning terms
When is log-loss metric appropriate for evaluating performance of a classifier?
Multi-class logarithmic loss function per class
invariance of correlation to linear transformation: $\text{corr}(aX+b, cY+d) = \text{corr}(X,Y)$

Since $$\text{corr}(X,Y)=\frac{\text{cov}(X,Y)}{\text{var}(X)^{1/2}\,\text{var}(Y)^{1/2}}$$
and $$\text{cov}(aX+b,cY+d)=ac\,\text{cov}(X,Y)$$
the equality $$\text{corr}(aX+b, cY+d) = \text{corr}(X,Y)$$ only holds when $a$ and $c$ are both positive or both negative, i.e. $ac>0$.
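The sign condition is easy to verify numerically; a minimal numpy sketch with simulated data (the coefficients are chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=1000)
y = 0.5 * x + rng.normal(size=1000)

def corr(u, v):
    return np.corrcoef(u, v)[0, 1]

r = corr(x, y)
same = corr(3 * x + 7, 2 * y - 1)      # a c = 6 > 0: correlation unchanged
flipped = corr(-3 * x + 7, 2 * y - 1)  # a c = -6 < 0: the sign flips
```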
invariance of correlation to linear transformation: $\text{corr}(aX+b, cY+d) = \text{corr}(X,Y)$

We know that:
COR(a X + b, c Y + d) = V(a X + b, c Y + d)/(V(a X + b) V(c Y + d))^0.5.
V(a X + b, c Y + d) = V(a X, c Y) = a c V(X, Y).
V(a X + b) = V(a X) = a^2 V(X).
V(c Y + d) = V(c Y) = c^2 V(Y).
Combining all of the above:
COR(a X + b, c Y + d) = a c V(X, Y) / (a^2 c^2 V(X) V(Y))^0.5 = a c COR(X, Y) / (|a| |c|) = sign(a) sign(c) COR(X, Y).
So COR(a X + b, c Y + d) equals COR(X, Y) for a c > 0, and -COR(X, Y) for a c < 0 (for a c = 0 the correlation is undefined, since one of the variables is constant).
regression with scikit-learn with multiple outputs, svr or gbm possible?

Why not make a wrapper that would fit m regressors (where m is dimensionality of each y) like this?
import numpy as np
import sklearn.base

class VectorRegression(sklearn.base.BaseEstimator):
    def __init__(self, estimator):
        self.estimator = estimator

    def fit(self, X, y):
        n, m = y.shape
        # Fit a separate clone of the base regressor to each column of y
        self.estimators_ = [sklearn.base.clone(self.estimator).fit(X, y[:, i])
                            for i in range(m)]
        return self

    def predict(self, X):
        # Stack the per-column predictions into an (n_samples, m) array
        res = [est.predict(X)[:, np.newaxis] for est in self.estimators_]
        return np.hstack(res)
Note: I haven't tested this code, but you got the idea.
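To see the per-column idea in isolation, here is a numpy-only toy (my illustration: MeanRegressor and the helper functions are made-up stand-ins, not scikit-learn API) showing the same fit-one-estimator-per-output pattern.

```python
import numpy as np

class MeanRegressor:
    """Toy single-output estimator: always predicts the training mean of y."""
    def fit(self, X, y):
        self.mean_ = y.mean()
        return self

    def predict(self, X):
        return np.full(len(X), self.mean_)

def fit_per_column(make_estimator, X, Y):
    # One independent single-output estimator per column of Y
    return [make_estimator().fit(X, Y[:, j]) for j in range(Y.shape[1])]

def predict_stacked(estimators, X):
    # Stack the per-column predictions back into an (n_samples, m) array
    return np.column_stack([est.predict(X) for est in estimators])

X = np.zeros((4, 3))
Y = np.array([[1.0, 10.0], [3.0, 30.0], [5.0, 50.0], [7.0, 70.0]])
models = fit_per_column(MeanRegressor, X, Y)
pred = predict_stacked(models, X[:2])  # each row: [4., 40.], the column means of Y
```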
regression with scikit-learn with multiple outputs, svr or gbm possible?

Scikit-Learn also has a general class, MultiOutputRegressor, which takes a single-output regression model and fits one regressor separately to each target.
Your code would then look something like this (using k-NN as example):
import numpy as np
from sklearn.neighbors import KNeighborsRegressor
from sklearn.multioutput import MultiOutputRegressor
X = np.random.random((10,3))
y = np.random.random((10,2))
X2 = np.random.random((7,3))
knn = KNeighborsRegressor()
regr = MultiOutputRegressor(knn)
regr.fit(X,y)
regr.predict(X2)
regression with scikit-learn with multiple outputs, svr or gbm possible?

To answer the question from the edit: I guess that algorithms that naturally support multi-output targets perform best. This is because these algorithms calculate the multiple output variables simultaneously and hence take possible correlations between outputs into account. This is not the case if you use MultiOutputRegressor from sklearn, which fits a model for each output variable individually.
SVR naturally only supports single-output regression. But there are different adaptations that can be made to fit the algorithm to a multi-output regression task as well. For an extensive overview, check the paper in the Reference section of this repository.
You can find an example for an implementation of Multiple-output support vector regression in python here. It is based on the paper Multi-step-ahead time series prediction using multiple-output support vector regression.
You also might want to check out this answer.
regression with scikit-learn with multiple outputs, svr or gbm possible?

I think that scikit-learn only supports multi-output regressors in decision trees: DecisionTreeRegressor.
Difference between PCA and spectral clustering for a small sample set of Boolean features

What is the conceptual difference between doing direct PCA vs. using the eigenvalues of the similarity matrix?
PCA is done on a covariance or correlation matrix, but spectral clustering can take any similarity matrix (e.g. built with cosine similarity) and find clusters there.
Second, spectral clustering algorithms are based on graph partitioning (usually it's about finding the best cuts of the graph), while PCA finds the directions that have most of the variance. Although in both cases we end up finding the eigenvectors, the conceptual approaches are different.
And finally, I see that PCA and spectral clustering serve different purposes: one is a dimensionality reduction technique and the other is more an approach to clustering (but it's done via dimensionality reduction).
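As a concrete toy illustration of the graph-partitioning view (my example, not from the answer): for a similarity matrix with two tight groups joined by one weak edge, the sign pattern of the second-smallest eigenvector of the graph Laplacian (the Fiedler vector) recovers the cut.

```python
import numpy as np

# Nodes 0-2 and 3-5 form two tight groups with one weak link between them
W = np.array([
    [0.00, 1.00, 1.00, 0.01, 0.00, 0.00],
    [1.00, 0.00, 1.00, 0.00, 0.00, 0.00],
    [1.00, 1.00, 0.00, 0.00, 0.00, 0.00],
    [0.01, 0.00, 0.00, 0.00, 1.00, 1.00],
    [0.00, 0.00, 0.00, 1.00, 0.00, 1.00],
    [0.00, 0.00, 0.00, 1.00, 1.00, 0.00],
])

D = np.diag(W.sum(axis=1))
L = D - W                       # unnormalized graph Laplacian

vals, vecs = np.linalg.eigh(L)  # symmetric: eigenvalues come back ascending
fiedler = vecs[:, 1]            # eigenvector of the second-smallest eigenvalue

labels = (fiedler > 0).astype(int)
print(labels)                   # nodes 0-2 get one label, nodes 3-5 the other
```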
Difference between PCA and spectral clustering for a small sample set of Boolean features

For Boolean (i.e., categorical with two classes) features, a good alternative to using PCA consists in using Multiple Correspondence Analysis (MCA), which is simply the extension of PCA to categorical variables (see related thread). For some background about MCA, the papers are Husson et al. (2010), or Abdi and Valentin (2007). An excellent R package to perform MCA is FactoMineR. It provides you with tools to plot two-dimensional maps of the loadings of the observations on the principal components, which is very insightful.
Below are two map examples from one of my past research projects (plotted with ggplot2). I had only about 60 observations and it gave good results. The first map represents the observations in the space PC1-PC2, the second map in the space PC3-PC4... The variables are also represented in the map, which helps with interpreting the meaning of the dimensions. Collecting the insight from several of these maps can give you a pretty nice picture of what's happening in your data.
On the website linked above, you will also find information about a novel procedure, HCPC, which stands for Hierarchical Clustering on Principal Components, and which might be of interest to you. Basically, this method works as follows:
perform a MCA,
retain the first $k$ dimensions (where $k<p$, with $p$ your original number of features). This step is useful in that it removes some noise, and hence allows a more stable clustering,
perform an agglomerative (bottom-up) hierarchical clustering in the space of the retained PCs. Since you use the coordinates of the projections of the observations in the PC space (real numbers), you can use the Euclidean distance, with Ward's criterion for the linkage (minimum increase in within-cluster variance). You can cut the dendrogram at the height you like, or let the R function cut it for you based on some heuristic,
(optional) stabilize the clusters by performing a K-means clustering. The initial configuration is given by the centers of the clusters found at the previous step.
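The clustering steps above can be sketched in a few lines. The following is a toy pure-Python illustration (not FactoMineR's implementation): it treats the input tuples as coordinates in the retained-PC space, merges clusters by the Ward criterion (minimum increase in within-cluster variance), cuts at k clusters, and then runs a k-means refinement seeded by the cluster centers. All data and names are made up for the example.

```python
# Toy sketch of HCPC steps 2-4: Ward-style agglomerative clustering of points
# in a (pretend) retained-PC space, then a k-means refinement step.
from math import dist

def centroid(pts):
    n = len(pts)
    return tuple(sum(c) / n for c in zip(*pts))

def ward_cost(a, b):
    # Increase in within-cluster variance when merging clusters a and b:
    # |a||b| / (|a|+|b|) times the squared distance between their centroids.
    na, nb = len(a), len(b)
    return na * nb / (na + nb) * dist(centroid(a), centroid(b)) ** 2

def agglomerate(points, k):
    clusters = [[p] for p in points]
    while len(clusters) > k:  # "cut the dendrogram" at k clusters
        i, j = min(
            ((i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))),
            key=lambda ij: ward_cost(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] = clusters[i] + clusters[j]
        del clusters[j]
    return clusters

def kmeans_refine(points, centers, iters=10):
    # Optional stabilisation step: k-means initialised at the HC centers.
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            groups[min(range(len(centers)), key=lambda c: dist(p, centers[c]))].append(p)
        centers = [centroid(g) if g else centers[i] for i, g in enumerate(groups)]
    return groups

pts = [(0.1, 0.2), (0.0, 0.0), (0.2, 0.1), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
clusters = agglomerate(pts, k=2)
groups = kmeans_refine(pts, [centroid(c) for c in clusters])
```

On real data you would run this on the first $k$ PC coordinates from the MCA rather than raw features, which is the whole point of the noise-removal step.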
Then, you have lots of ways to investigate the clusters (most representative features, most representative individuals, etc.) | Difference between PCA and spectral clustering for a small sample set of Boolean features | For Boolean (i.e., categorical with two classes) features, a good alternative to using PCA consists in using Multiple Correspondence Analysis (MCA), which is simply the extension of PCA to categorical | Difference between PCA and spectral clustering for a small sample set of Boolean features
For Boolean (i.e., categorical with two classes) features, a good alternative to using PCA consists in using Multiple Correspondence Analysis (MCA), which is simply the extension of PCA to categorical variables (see related thread). For some background about MCA, see the papers by Husson et al. (2010) or Abdi and Valentin (2007). An excellent R package to perform MCA is FactoMineR. It provides you with tools to plot two-dimensional maps of the loadings of the observations on the principal components, which is very insightful.
Below are two map examples from one of my past research projects (plotted with ggplot2). I had only about 60 observations and it gave good results. The first map represents the observations in the space PC1-PC2, the second map in the space PC3-PC4... The variables are also represented in the map, which helps with interpreting the meaning of the dimensions. Collecting the insight from several of these maps can give you a pretty nice picture of what's happening in your data.
On the website linked above, you will also find information about a novel procedure, HCPC, which stands for Hierarchical Clustering on Principal Components, and which might be of interest to you. Basically, this method works as follows:
perform an MCA,
retain the first $k$ dimensions (where $k<p$, with $p$ your original number of features). This step is useful in that it removes some noise, and hence allows a more stable clustering,
perform an agglomerative (bottom-up) hierarchical clustering in the space of the retained PCs. Since you use the coordinates of the projections of the observations in the PC space (real numbers), you can use the Euclidean distance, with Ward's criterion for the linkage (minimum increase in within-cluster variance). You can cut the dendrogram at the height you like, or let the R function cut it for you based on some heuristic,
(optional) stabilize the clusters by performing a K-means clustering. The initial configuration is given by the centers of the clusters found at the previous step.
Then, you have lots of ways to investigate the clusters (most representative features, most representative individuals, etc.) | Difference between PCA and spectral clustering for a small sample set of Boolean features
For Boolean (i.e., categorical with two classes) features, a good alternative to using PCA consists in using Multiple Correspondence Analysis (MCA), which is simply the extension of PCA to categorical |
25,558 | Any example code of REINFORCE algorithm proposed by Williams? | From David Silver's RL lecture on Policy Gradient methods, slide 21, here is pseudo-code for the episodic REINFORCE algorithm, which is basically a gradient-based method where the expected return is sampled directly from the episode (as opposed to estimating it with some learned function). In this case the expected return is the total episodic reward from that step onward, $G_t$.
initialise $\theta$
for each episode {$s_1, a_1, r_2 ... s_{T-1}, a_{T-1}, r_T$} sampled from policy $\pi_\theta$ do
for t = 1 to T - 1 do
$\theta \leftarrow \theta + \alpha \nabla_\theta \log \pi_\theta(s_t,a_t) G_t$
end for
end for
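As a concrete (toy) illustration of the update in the loop above, here is a minimal sketch on a hypothetical two-armed bandit, i.e. episodes of length one, so that $G_t$ reduces to the single reward. This is not Williams' original code; it is just the gradient step above with a softmax policy, and the problem (action 0 pays 1, action 1 pays 0) is made up.

```python
# Minimal REINFORCE sketch: softmax policy over two actions, episodes of
# length one (a bandit), so the return G_t is just the immediate reward.
import math, random

random.seed(0)
theta = [0.0, 0.0]   # policy parameters, one per action
alpha = 0.1          # learning rate

def policy(th):
    z = [math.exp(t) for t in th]
    s = sum(z)
    return [p / s for p in z]

for _ in range(2000):
    probs = policy(theta)
    a = random.choices([0, 1], weights=probs)[0]   # sample an action from pi
    G = 1.0 if a == 0 else 0.0                     # episodic return (toy reward)
    # gradient of log softmax: 1{k == a} - pi(k); REINFORCE update
    for k in range(2):
        theta[k] += alpha * ((1.0 if k == a else 0.0) - probs[k]) * G

final_probs = policy(theta)   # probability of the rewarding action grows toward 1
```

With a real multi-step episode you would accumulate $G_t$ from the sampled rewards and apply the same update at every time step, as in the pseudo-code.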
This algorithm suffers from high variance because the sampled rewards can be very different from one episode to another; therefore it is usually used with a baseline subtracted from the return. Here is a more detailed explanation complete with code samples. | Any example code of REINFORCE algorithm proposed by Williams? | From David Silver's RL lecture on Policy Gradient methods, slide 21 here is pseudo-code for the episodic Reinforce algorithm, which basically is a gradient-based method where the expected return is sa | Any example code of REINFORCE algorithm proposed by Williams?
From David Silver's RL lecture on Policy Gradient methods, slide 21, here is pseudo-code for the episodic REINFORCE algorithm, which is basically a gradient-based method where the expected return is sampled directly from the episode (as opposed to estimating it with some learned function). In this case the expected return is the total episodic reward from that step onward, $G_t$.
initialise $\theta$
for each episode {$s_1, a_1, r_2 ... s_{T-1}, a_{T-1}, r_T$} sampled from policy $\pi_\theta$ do
for t = 1 to T - 1 do
$\theta \leftarrow \theta + \alpha \nabla_\theta \log \pi_\theta(s_t,a_t) G_t$
end for
end for
This algorithm suffers from high variance because the sampled rewards can be very different from one episode to another; therefore it is usually used with a baseline subtracted from the return. Here is a more detailed explanation complete with code samples. | Any example code of REINFORCE algorithm proposed by Williams?
From David Silver's RL lecture on Policy Gradient methods, slide 21 here is pseudo-code for the episodic Reinforce algorithm, which basically is a gradient-based method where the expected return is sa |
25,559 | Any example code of REINFORCE algorithm proposed by Williams? | The REINFORCE algorithm for policy-gradient reinforcement learning is a simple stochastic gradient algorithm. It works well when episodes are reasonably short so lots of episodes can be simulated. Value-function methods are better for longer episodes because they can start learning before the end of a single episode. | Any example code of REINFORCE algorithm proposed by Williams? | The REINFORCE algorithm for policy-gradient reinforcement learning is a simple stochastic gradient algorithm. It works well when episodes are reasonably short so lots of episodes can be simulated. Val | Any example code of REINFORCE algorithm proposed by Williams?
The REINFORCE algorithm for policy-gradient reinforcement learning is a simple stochastic gradient algorithm. It works well when episodes are reasonably short so lots of episodes can be simulated. Value-function methods are better for longer episodes because they can start learning before the end of a single episode. | Any example code of REINFORCE algorithm proposed by Williams?
The REINFORCE algorithm for policy-gradient reinforcement learning is a simple stochastic gradient algorithm. It works well when episodes are reasonably short so lots of episodes can be simulated. Val |
25,560 | Need help calculating poisson posterior distribution given prior | You can read these Wikipedia pages on conjugate prior and prior probability first. In short, the posterior probability is the probability of the parameters $\theta$ given the data x, or $p(\theta|x)$, and the prior probability describes the uncertainty about the parameters before seeing the data, based on subjective assessment, or $p(\theta)$.
Based on Bayes' theorem, the relationship between the prior, the posterior, and the likelihood function is
$p(\theta|x) = \frac{p(x|\theta)p(\theta)}{\int p(x|\theta')p(\theta')\,d\theta'}$. For certain choices of $p(\theta)$, the posterior and the prior have the same algebraic form, and the integral in the denominator has a closed form; such a $p(\theta)$ is called a conjugate prior. The bottom of the conjugate prior page shows that the Gamma distribution is a conjugate prior of the Poisson distribution.
Before computing the posterior $p(\lambda|x)$ with prior $g(\lambda;\alpha,\beta) = \frac{\beta^{\alpha} \lambda^{\alpha-1} e^{-\lambda\beta}}{\Gamma(\alpha)}$ and Poisson pmf $p(x|\lambda)= \frac{e^{-\lambda}\lambda^x }{x!}$, what are the samples from the Poisson distribution? You need to have some samples drawn from the Poisson distribution to compute the likelihood with them and then compute the posterior.
Suppose the samples are $x = \{x_1, ..., x_n\}$, then $p(x|\lambda) = \frac{\lambda^{\sum{x_i}} e^{-n\lambda}}{x_1!\cdots x_n!}$. It's too troublesome to type the derivation since the likelihood of Poisson samples is fairly complex. The following derivation considers that there's only one sample x and $p(x|\lambda)= \frac{e^{-\lambda}\lambda^x }{x!}$, and you can derive the posterior for n samples on your own and see if the posterior follows a Gamma distribution with parameters $\alpha + \sum_{i=1}^n x_i ,\ \beta + n\!$ as listed in the first link.
$p(\lambda|x)= (\frac{e^{-\lambda}\lambda^x}{x!} * \frac{\beta^{\alpha} \lambda^{\alpha-1} e^{-\lambda\beta}}{\Gamma(\alpha)})\div (\int_0^\infty (\frac{e^{-\lambda}\lambda^x}{x!} * \frac{\beta^{\alpha} \lambda^{\alpha-1} e^{-\lambda\beta}}{\Gamma(\alpha)}) d\lambda)$. The denominator = $\int_0^\infty ( \frac{\beta^{\alpha+x} \lambda^{\alpha+x-1} e^{-\lambda(\beta+1)}}{\beta^x*x!*\Gamma(\alpha)}) d\lambda = \frac{ \Gamma(\alpha +x)}{\beta^x*x!*\Gamma(\alpha)} \int_0^\infty Gamma(\lambda; \alpha + x, \beta+1) d\lambda = \frac{ \Gamma(\alpha +x)}{\beta^x*x!*\Gamma(\alpha)}$.
After canceling some terms in the numerator and denominator, $p(\lambda|x)= \frac{ \beta^{\alpha+x} \lambda^{\alpha+x-1} e^{-\lambda(\beta+1)}}{\Gamma(\alpha+x)} = Gamma(\lambda; \alpha + x, \beta+1)$. Substitute the values $\alpha, \beta, \lambda$ , and x to obtain the posterior probability. | Need help calculating poisson posterior distribution given prior | You can read these Wikipedia pages on conjugate prior and prior probability first. In short, the posterior probability is the probability of parameters $\theta$ given the data x, or $p(\theta|x)$, and | Need help calculating poisson posterior distribution given prior
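To make the n-sample result concrete: with a $Gamma(\alpha,\beta)$ prior and Poisson samples $x_1,\dots,x_n$, the posterior is $Gamma(\alpha+\sum_i x_i,\ \beta+n)$. A tiny sketch with made-up numbers:

```python
# Poisson-Gamma conjugacy: posterior parameters and posterior mean
# for a Gamma(alpha, beta) prior and Poisson samples (toy numbers).
alpha, beta = 2.0, 1.0    # prior Gamma parameters (rate parameterisation)
x = [3, 1, 4, 2]          # hypothetical Poisson observations

a_post = alpha + sum(x)   # shape: alpha + sum of the counts
b_post = beta + len(x)    # rate:  beta + number of samples
posterior_mean = a_post / b_post

print(a_post, b_post, posterior_mean)   # 12.0 5.0 2.4
```

The posterior mean $(\alpha+\sum x_i)/(\beta+n)$ sits between the prior mean $\alpha/\beta$ and the sample mean, which is a quick sanity check on the update.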
You can read these Wikipedia pages on conjugate prior and prior probability first. In short, the posterior probability is the probability of the parameters $\theta$ given the data x, or $p(\theta|x)$, and the prior probability describes the uncertainty about the parameters before seeing the data, based on subjective assessment, or $p(\theta)$.
Based on Bayes' theorem, the relationship between the prior, the posterior, and the likelihood function is
$p(\theta|x) = \frac{p(x|\theta)p(\theta)}{\int p(x|\theta')p(\theta')\,d\theta'}$. For certain choices of $p(\theta)$, the posterior and the prior have the same algebraic form, and the integral in the denominator has a closed form; such a $p(\theta)$ is called a conjugate prior. The bottom of the conjugate prior page shows that the Gamma distribution is a conjugate prior of the Poisson distribution.
Before computing the posterior $p(\lambda|x)$ with prior $g(\lambda;\alpha,\beta) = \frac{\beta^{\alpha} \lambda^{\alpha-1} e^{-\lambda\beta}}{\Gamma(\alpha)}$ and Poisson pmf $p(x|\lambda)= \frac{e^{-\lambda}\lambda^x }{x!}$, what are the samples from the Poisson distribution? You need to have some samples drawn from the Poisson distribution to compute the likelihood with them and then compute the posterior.
Suppose the samples are $x = \{x_1, ..., x_n\}$, then $p(x|\lambda) = \frac{\lambda^{\sum{x_i}} e^{-n\lambda}}{x_1!\cdots x_n!}$. It's too troublesome to type the derivation since the likelihood of Poisson samples is fairly complex. The following derivation considers that there's only one sample x and $p(x|\lambda)= \frac{e^{-\lambda}\lambda^x }{x!}$, and you can derive the posterior for n samples on your own and see if the posterior follows a Gamma distribution with parameters $\alpha + \sum_{i=1}^n x_i ,\ \beta + n\!$ as listed in the first link.
$p(\lambda|x)= (\frac{e^{-\lambda}\lambda^x}{x!} * \frac{\beta^{\alpha} \lambda^{\alpha-1} e^{-\lambda\beta}}{\Gamma(\alpha)})\div (\int_0^\infty (\frac{e^{-\lambda}\lambda^x}{x!} * \frac{\beta^{\alpha} \lambda^{\alpha-1} e^{-\lambda\beta}}{\Gamma(\alpha)}) d\lambda)$. The denominator = $\int_0^\infty ( \frac{\beta^{\alpha+x} \lambda^{\alpha+x-1} e^{-\lambda(\beta+1)}}{\beta^x*x!*\Gamma(\alpha)}) d\lambda = \frac{ \Gamma(\alpha +x)}{\beta^x*x!*\Gamma(\alpha)} \int_0^\infty Gamma(\lambda; \alpha + x, \beta+1) d\lambda = \frac{ \Gamma(\alpha +x)}{\beta^x*x!*\Gamma(\alpha)}$.
After canceling some terms in the numerator and denominator, $p(\lambda|x)= \frac{ \beta^{\alpha+x} \lambda^{\alpha+x-1} e^{-\lambda(\beta+1)}}{\Gamma(\alpha+x)} = Gamma(\lambda; \alpha + x, \beta+1)$. Substitute the values $\alpha, \beta, \lambda$ , and x to obtain the posterior probability. | Need help calculating poisson posterior distribution given prior
You can read these Wikipedia pages on conjugate prior and prior probability first. In short, the posterior probability is the probability of parameters $\theta$ given the data x, or $p(\theta|x)$, and |
25,561 | Need help calculating poisson posterior distribution given prior | I think the derivation by @Tom is partially correct and there is a value missing from the final expression.
\begin{align}
\int_0^{\infty}(\frac{\beta^{\alpha}\lambda^{\alpha+x-1}e^{-\lambda(\beta+1)}}{x!\Gamma(\alpha)})d\lambda & = \frac{\Gamma(\alpha+x)}{x!\Gamma(\alpha)}\frac{\beta^\alpha}{(1+\beta)^{\alpha+x}}\int_0^{\infty}\frac{(\beta+1)^{\alpha+x}}{\Gamma(\alpha+x)}\lambda^{\alpha+x-1}e^{-\lambda(\beta+1)}d\lambda \\
& = \frac{\Gamma(\alpha+x)}{x!\Gamma(\alpha)}\frac{\beta^\alpha}{(1+\beta)^{\alpha+x}}\int_0^{\infty}Gamma(\lambda;\alpha+x,\beta+1)d\lambda \\
& = \frac{\Gamma(\alpha+x)}{x!\Gamma(\alpha)}\frac{\beta^\alpha}{(1+\beta)^{\alpha+x}}
\end{align}
I guess this would be the final form.
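The closed form can be checked numerically. The sketch below (with arbitrary toy values of $\alpha$, $\beta$, and $x$) integrates the Poisson likelihood times the Gamma prior by crude trapezoidal quadrature and compares it with $\frac{\Gamma(\alpha+x)}{x!\Gamma(\alpha)}\frac{\beta^\alpha}{(1+\beta)^{\alpha+x}}$:

```python
# Numerical check (toy values) that
#   integral_0^inf  Poisson(x | lam) * Gamma(lam; alpha, beta)  d lam
# equals  Gamma(alpha+x) / (x! Gamma(alpha)) * beta^alpha / (1+beta)^(alpha+x).
import math

alpha, beta, x = 2.0, 1.0, 3   # hypothetical prior parameters and observation

def integrand(lam):
    poisson = math.exp(-lam) * lam ** x / math.factorial(x)
    prior = beta ** alpha * lam ** (alpha - 1) * math.exp(-lam * beta) / math.gamma(alpha)
    return poisson * prior

# crude trapezoidal quadrature on [0, 60] (the integrand is negligible beyond)
n_steps = 60000
h = 60.0 / n_steps
total = sum(integrand(i * h) for i in range(n_steps + 1))
numeric = h * (total - 0.5 * (integrand(0.0) + integrand(60.0)))

closed = (math.gamma(alpha + x) / (math.factorial(x) * math.gamma(alpha))
          * beta ** alpha / (1.0 + beta) ** (alpha + x))
```

This marginal likelihood is in fact a negative binomial probability in $x$, which is another way to see that the constant must involve $(1+\beta)^{\alpha+x}$ rather than $\beta^x$.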
Now if you plug this in as the denominator of $p(\lambda|x)$, then after some cancellation we get $Gamma(\lambda ;\alpha+x,\beta+1)$ | Need help calculating poisson posterior distribution given prior | I think the derivation by @Tom is partially correct and there is a value missing from the final expression.
\begin{align}
\int_0^{\infty}(\frac{\beta^{\alpha}\lambda^{\alpha+x-1}e^{-\lambda(\beta+1)} | Need help calculating poisson posterior distribution given prior
I think the derivation by @Tom is partially correct and there is a value missing from the final expression.
\begin{align}
\int_0^{\infty}(\frac{\beta^{\alpha}\lambda^{\alpha+x-1}e^{-\lambda(\beta+1)}}{x!\Gamma(\alpha)})d\lambda & = \frac{\Gamma(\alpha+x)}{x!\Gamma(\alpha)}\frac{\beta^\alpha}{(1+\beta)^{\alpha+x}}\int_0^{\infty}\frac{(\beta+1)^{\alpha+x}}{\Gamma(\alpha+x)}\lambda^{\alpha+x-1}e^{-\lambda(\beta+1)}d\lambda \\
& = \frac{\Gamma(\alpha+x)}{x!\Gamma(\alpha)}\frac{\beta^\alpha}{(1+\beta)^{\alpha+x}}\int_0^{\infty}Gamma(\lambda;\alpha+x,\beta+1)d\lambda \\
& = \frac{\Gamma(\alpha+x)}{x!\Gamma(\alpha)}\frac{\beta^\alpha}{(1+\beta)^{\alpha+x}}
\end{align}
I guess this would be the final form.
Now if you plug this in as the denominator of $p(\lambda|x)$, then after some cancellation we get $Gamma(\lambda ;\alpha+x,\beta+1)$ | Need help calculating poisson posterior distribution given prior
I think the derivation by @Tom is partially correct and there is a value missing from the final expression.
\begin{align}
\int_0^{\infty}(\frac{\beta^{\alpha}\lambda^{\alpha+x-1}e^{-\lambda(\beta+1)} |
25,562 | What is a log-odds distribution? | why does it help with numbers bounded above and below?
A distribution defined on $(0,1)$ is what makes it suitable as a model for data on $(0,1)$. I don't think the text implies anything more than "it's a model for data on $(0,1)$" (or more generally, on $(a,b)$).
what is this distribution ... ?
The term 'log-odds distribution' is, unfortunately, not completely standard (and not a very common term even then).
I'll discuss some possibilities for what it might mean. Let's start by considering a way to construct distributions for values in the unit interval.
A common way to model a continuous random variable, $P$ in $(0,1)$ is the beta distribution, and a common way to model discrete proportions in $[0,1]$ is a scaled binomial ($P=X/n$, at least when $X$ is a count).
An alternative to using a beta distribution would be to take some continuous inverse CDF ($F^{-1}$) and use it to transform the values in $(0,1)$ to the real line (or rarely, the real half-line) and then use any relevant distribution ($G$) to model the values on the transformed range. This opens up many possibilities, since any pair of continuous distributions on the real line ($F,G$) are available for the transformation and the model.
So, for example, the log-odds transformation $Y=\log(\frac{P}{1-P})$ (also called the logit) would be one such inverse-cdf transformation (being the inverse CDF of a standard logistic), and then there are many distributions we might consider as models for $Y$.
We might then use (for example) a logistic$(\mu,\tau)$ model for $Y$, a simple two-parameter family on the real line. Transforming back to $(0,1)$ via the inverse log-odds transformation (i.e. $P=\frac{\exp(Y)}{1+\exp(Y)}$) yields a two parameter distribution for $P$, one that can be unimodal, or U shaped, or J shaped, symmetric or skew, in many ways somewhat like a beta distribution (personally, I'd call this logit-logistic, since its logit is logistic). Here are some examples for different values of $\mu,\tau$:
[Figure omitted: example logit-logistic densities on $(0,1)$ for several values of $\mu$ and $\tau$.]
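A small sampling sketch of this construction (with toy values for $\mu$ and $\tau$): draw $Y$ from a logistic via its inverse CDF, then push it back through the inverse log-odds map.

```python
# Sampling sketch for the "logit-logistic" construction described above:
# draw Y ~ Logistic(mu, tau) by inverse-CDF sampling, then map back to (0,1)
# with the inverse log-odds (sigmoid) transform. Parameter values are toy.
import math, random

random.seed(1)
mu, tau = 1.0, 0.5   # hypothetical location/scale for Y

def sample_p():
    u = random.random()
    y = mu + tau * math.log(u / (1.0 - u))   # inverse CDF of Logistic(mu, tau)
    return 1.0 / (1.0 + math.exp(-y))        # P = exp(Y) / (1 + exp(Y))

samples = [sample_p() for _ in range(20000)]
median = sorted(samples)[len(samples) // 2]
# Because the sigmoid is monotone, the median of P equals sigmoid(mu).
```

Varying $\mu$ and $\tau$ and histogramming `samples` reproduces the range of shapes described above (unimodal, U shaped, J shaped, symmetric or skew).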
Looking at the brief mention in the text by Witten et al, this might be what's intended by "log-odds distribution" - but they might as easily mean something else.
Another possibility is that the logit-normal was intended.
However, the term seems to have been used by van Erp & van Gelder (2008)$^{[1]}$, for example, to refer to a log-odds transformation on a beta distribution (so in effect taking $F$ as a logistic and $G$ as the distribution of the log of a beta-prime random variable, or equivalently the distribution of the difference of the logs of two chi-square random variables). However, they are using this to model count proportions, which are discrete. This, of course, leads to some problems (caused by trying to model a distribution with finite probability at 0 and 1 with one on $(0,1)$), which they then seem to spend a lot of effort on. (It would seem easier to just avoid the inappropriate model, but maybe that's just me.)
Several other documents (I found at least three) refer to the sample distribution of log-odds (i.e. on the scale of $Y$ above) as "the log-odds distribution" (in some cases where $P$ is a discrete proportion* and in some cases where it's a continuous proportion) - so in that case it's not a probability model as such, but it's something to which you might apply some distributional model on the real line.
* again, this has the problem that if $P$ is exactly 0 or 1, the value of $Y$ will be $-\infty$ or $\infty$ respectively ... which suggests we must bound the distribution away from 0 and 1 to use it for this purpose.
The dissertation by Yan Guo (2009)$^{[2]}$ uses the term to refer to a log-logistic distribution, a right-skew distribution on the real half-line.
So as you see, it's not a term with a single meaning. Without a clearer indication from Witten or one of the other authors of that book, we're left to guess what is intended.
[1]: Noel van Erp & Pieter van Gelder, (2008),
"How to Interpret the Beta Distribution in Case of a Breakdown,"
Proceedings of the 6th International Probabilistic Workshop, Darmstadt
pdf link
[2]: Yan Guo, (2009),
The New Methods on NDE Systems Pod Capability Assessment and Robustness,
Dissertation submitted to the Graduate School of Wayne State University, Detroit, Michigan | What is a log-odds distribution? | why does it help with numbers bounded above and below?
A distribution defined on $(0,1)$ is what makes it suitable as a model for data on $(0,1)$. I don't think the text implies anything more than "i | What is a log-odds distribution?
why does it help with numbers bounded above and below?
A distribution defined on $(0,1)$ is what makes it suitable as a model for data on $(0,1)$. I don't think the text implies anything more than "it's a model for data on $(0,1)$" (or more generally, on $(a,b)$).
what is this distribution ... ?
The term 'log-odds distribution' is, unfortunately, not completely standard (and not a very common term even then).
I'll discuss some possibilities for what it might mean. Let's start by considering a way to construct distributions for values in the unit interval.
A common way to model a continuous random variable, $P$ in $(0,1)$ is the beta distribution, and a common way to model discrete proportions in $[0,1]$ is a scaled binomial ($P=X/n$, at least when $X$ is a count).
An alternative to using a beta distribution would be to take some continuous inverse CDF ($F^{-1}$) and use it to transform the values in $(0,1)$ to the real line (or rarely, the real half-line) and then use any relevant distribution ($G$) to model the values on the transformed range. This opens up many possibilities, since any pair of continuous distributions on the real line ($F,G$) are available for the transformation and the model.
So, for example, the log-odds transformation $Y=\log(\frac{P}{1-P})$ (also called the logit) would be one such inverse-cdf transformation (being the inverse CDF of a standard logistic), and then there are many distributions we might consider as models for $Y$.
We might then use (for example) a logistic$(\mu,\tau)$ model for $Y$, a simple two-parameter family on the real line. Transforming back to $(0,1)$ via the inverse log-odds transformation (i.e. $P=\frac{\exp(Y)}{1+\exp(Y)}$) yields a two parameter distribution for $P$, one that can be unimodal, or U shaped, or J shaped, symmetric or skew, in many ways somewhat like a beta distribution (personally, I'd call this logit-logistic, since its logit is logistic). Here are some examples for different values of $\mu,\tau$:
[Figure omitted: example logit-logistic densities on $(0,1)$ for several values of $\mu$ and $\tau$.]
Looking at the brief mention in the text by Witten et al, this might be what's intended by "log-odds distribution" - but they might as easily mean something else.
Another possibility is that the logit-normal was intended.
However, the term seems to have been used by van Erp & van Gelder (2008)$^{[1]}$, for example, to refer to a log-odds transformation on a beta distribution (so in effect taking $F$ as a logistic and $G$ as the distribution of the log of a beta-prime random variable, or equivalently the distribution of the difference of the logs of two chi-square random variables). However, they are using this to model count proportions, which are discrete. This, of course, leads to some problems (caused by trying to model a distribution with finite probability at 0 and 1 with one on $(0,1)$), which they then seem to spend a lot of effort on. (It would seem easier to just avoid the inappropriate model, but maybe that's just me.)
Several other documents (I found at least three) refer to the sample distribution of log-odds (i.e. on the scale of $Y$ above) as "the log-odds distribution" (in some cases where $P$ is a discrete proportion* and in some cases where it's a continuous proportion) - so in that case it's not a probability model as such, but it's something to which you might apply some distributional model on the real line.
* again, this has the problem that if $P$ is exactly 0 or 1, the value of $Y$ will be $-\infty$ or $\infty$ respectively ... which suggests we must bound the distribution away from 0 and 1 to use it for this purpose.
The dissertation by Yan Guo (2009)$^{[2]}$ uses the term to refer to a log-logistic distribution, a right-skew distribution on the real half-line.
So as you see, it's not a term with a single meaning. Without a clearer indication from Witten or one of the other authors of that book, we're left to guess what is intended.
[1]: Noel van Erp & Pieter van Gelder, (2008),
"How to Interpret the Beta Distribution in Case of a Breakdown,"
Proceedings of the 6th International Probabilistic Workshop, Darmstadt
pdf link
[2]: Yan Guo, (2009),
The New Methods on NDE Systems Pod Capability Assessment and Robustness,
Dissertation submitted to the Graduate School of Wayne State University, Detroit, Michigan | What is a log-odds distribution?
why does it help with numbers bounded above and below?
A distribution defined on $(0,1)$ is what makes it suitable as a model for data on $(0,1)$. I don't think the text implies anything more than "i |
25,563 | What is a log-odds distribution? | I'm a software engineer (not a statistician) and I recently read a book called An Introduction to Statistical Learning. With applications in R.
I think what you're reading about is log-odds or logit. page 132
http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Fourth%20Printing.pdf
Brilliant book - I read it from cover to cover. Hope this helps | What is a log-odds distribution? | I'm a software engineer (not a statistician) and I recently read a book called An Introduction to Statistical Learning. With applications in R.
I think what you're reading about is log-odds or logit. | What is a log-odds distribution?
I'm a software engineer (not a statistician) and I recently read a book called An Introduction to Statistical Learning. With applications in R.
I think what you're reading about is log-odds or logit. page 132
http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Fourth%20Printing.pdf
Brilliant book - I read it from cover to cover. Hope this helps | What is a log-odds distribution?
I'm a software engineer (not a statistician) and I recently read a book called An Introduction to Statistical Learning. With applications in R.
I think what you're reading about is log-odds or logit. |
25,564 | How to prove that the manifold assumption is correct? | It quickly becomes apparent, by looking at many accounts of the "manifold assumption," that many writers are notably sloppy about its meaning. The more careful ones define it with a subtle but hugely important caveat: that the data lie on or close to a low-dimensional manifold.
Even those who do not include the "or close to" clause clearly adopt the manifold assumption as an approximate fiction, convenient for performing mathematical analysis, because their applications must contemplate deviations between the data and the estimated manifold. Indeed, many writers later introduce an explicit mechanism for deviations, such as contemplating regression of $y$ against $\mathrm x$ where $\mathrm x$ is constrained to lie on a manifold $M^k\subset \mathbb{R}^d$ but the $y$ may include random deviations. This is equivalent to supposing that the tuples $(\mathrm x_i, y_i)$ lie close to, but not necessarily on, an immersed $k$-dimensional manifold of the form
$$(\mathrm x,f(\mathrm x)) \in M^k \times \mathbb{R} \subset \mathbb{R}^d\times \mathbb{R}\approx \mathbb{R}^{d+1}$$
for some smooth (regression) function $f:\mathbb{R}^d\to \mathbb{R}$. Since we may view all the perturbed points $(\mathrm x,y)=(\mathrm x,f(\mathrm x)+\varepsilon)$, which are merely close to the graph of $f$ (a $k$ dimensional manifold), as lying on the $k+1$-dimensional manifold $M^k\times \mathbb R$, this helps explain why such sloppiness about distinguishing "on" from "close to" may be unimportant in theory.
The difference between "on" and "close to" is hugely important for applications. "Close to" allows that the data may deviate from the manifold. As such, if you choose to estimate that manifold, then the typical amount of deviation between the data and the manifold can be quantified. One fitted manifold will be better than another when the typical amount of deviation is less, ceteris paribus.
The figure shows two versions of the manifold assumption for the data (large blue dots): the black manifold is relatively simple (requiring only four parameters to describe) but only comes "close to" the data, while the red dotted manifold fits the data perfectly but is complicated (17 parameters are needed).
As in all such problems, there is a tradeoff between the complexity of describing the manifold and the goodness of fit (the overfitting problem). It is always the case that a one-dimensional manifold can be found to fit any finite amount of data in $\mathbb{R}^d$ perfectly (as with the red dotted manifold in the figure, just run a smooth curve through all the points, in any order: almost surely it will not intersect itself, but if it does, perturb the curve in the neighborhood of any such intersection to eliminate it). At the other extreme, if only a limited class of manifolds is allowed (such as straight Euclidean hyperplanes only), then a good fit may be impossible, regardless of the dimensions, and the typical deviation between data and the fit may be large.
This leads to a straightforward, practical way to assess the manifold assumption: if the model/predictor/classifier developed from the manifold assumption works acceptably well, then the assumption was justified. Thus, the appropriate conditions sought in the question will be that some relevant measure of goodness of fit be acceptably small. (What measure? It depends on the problem and is tantamount to selecting a loss function.)
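As a toy illustration of this kind of assessment, the sketch below fits the simplest possible manifold, a 1-D affine subspace (the principal axis of the 2x2 covariance), to noisy two-dimensional data and reports the typical perpendicular deviation; a small value supports the (here, linear) manifold assumption for these data. All data and numbers are made up for the example.

```python
# Sketch of the practical test described above: fit a 1-D linear manifold
# (the principal axis) to noisy 2-D data and quantify the typical deviation
# between the data and the manifold. Toy data, not a real dataset.
import math, random

random.seed(2)
pts = [(t, 2.0 * t + 1.0 + random.gauss(0.0, 0.05))
       for t in [random.uniform(-1, 1) for _ in range(500)]]

mx = sum(p[0] for p in pts) / len(pts)
my = sum(p[1] for p in pts) / len(pts)
cxx = sum((p[0] - mx) ** 2 for p in pts) / len(pts)
cyy = sum((p[1] - my) ** 2 for p in pts) / len(pts)
cxy = sum((p[0] - mx) * (p[1] - my) for p in pts) / len(pts)

# Principal direction of a 2x2 covariance matrix (closed form).
theta = 0.5 * math.atan2(2.0 * cxy, cxx - cyy)

# RMS perpendicular deviation of the points from the fitted manifold:
# small => the (linear) manifold assumption looks reasonable here.
rms = math.sqrt(sum((-math.sin(theta) * (p[0] - mx)
                     + math.cos(theta) * (p[1] - my)) ** 2 for p in pts) / len(pts))
```

The same recipe generalises: fit a candidate manifold of some class, measure the typical deviation, and trade that loss off against the complexity of the manifold description.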
It is possible that manifolds of different dimension (with different kinds of constraints on their curvature) may fit the data--and predict held-out data--equally well. Nothing can be "proven" about "the underlying" manifold in general, especially when working with large, messy, human datasets. All we can usually hope for is that the fitted manifold is a good model.
If you do not come up with a good model/predictor/classifier, then either the manifold assumption is invalid, you are assuming manifolds of too small a dimension, or you haven't looked hard enough or well enough. | How to prove that the manifold assumption is correct? | It quickly becomes apparent, by looking at many accounts of the "manifold assumption," that many writers are notably sloppy about its meaning. The more careful ones define it with a subtle but hugely | How to prove that the manifold assumption is correct?
It quickly becomes apparent, by looking at many accounts of the "manifold assumption," that many writers are notably sloppy about its meaning. The more careful ones define it with a subtle but hugely important caveat: that the data lie on or close to a low-dimensional manifold.
Even those who do not include the "or close to" clause clearly adopt the manifold assumption as an approximate fiction, convenient for performing mathematical analysis, because their applications must contemplate deviations between the data and the estimated manifold. Indeed, many writers later introduce an explicit mechanism for deviations, such as contemplating regression of $y$ against $\mathrm x$ where $\mathrm x$ is constrained to lie on a manifold $M^k\subset \mathbb{R}^d$ but the $y$ may include random deviations. This is equivalent to supposing that the tuples $(\mathrm x_i, y_i)$ lie close to, but not necessarily on, an immersed $k$-dimensional manifold of the form
$$(\mathrm x,f(\mathrm x)) \in M^k \times \mathbb{R} \subset \mathbb{R}^d\times \mathbb{R}\approx \mathbb{R}^{d+1}$$
for some smooth (regression) function $f:\mathbb{R}^d\to \mathbb{R}$. Since we may view all the perturbed points $(\mathrm x,y)=(\mathrm x,f(\mathrm x)+\varepsilon)$, which are merely close to the graph of $f$ (a $k$ dimensional manifold), as lying on the $k+1$-dimensional manifold $M^k\times \mathbb R$, this helps explain why such sloppiness about distinguishing "on" from "close to" may be unimportant in theory.
The difference between "on" and "close to" is hugely important for applications. "Close to" allows that the data may deviate from the manifold. As such, if you choose to estimate that manifold, then the typical amount of deviation between the data and the manifold can be quantified. One fitted manifold will be better than another when the typical amount of deviation is less, ceteris paribus.
The figure shows two versions of the manifold assumption for the data (large blue dots): the black manifold is relatively simple (requiring only four parameters to describe) but only comes "close to" the data, while the red dotted manifold fits the data perfectly but is complicated (17 parameters are needed).
As in all such problems, there is a tradeoff between the complexity of describing the manifold and the goodness of fit (the overfitting problem). It is always the case that a one-dimensional manifold can be found to fit any finite amount of data in $\mathbb{R}^d$ perfectly (as with the red dotted manifold in the figure, just run a smooth curve through all the points, in any order: almost surely it will not intersect itself, but if it does, perturb the curve in the neighborhood of any such intersection to eliminate it). At the other extreme, if only a limited class of manifolds is allowed (such as straight Euclidean hyperplanes only), then a good fit may be impossible, regardless of the dimensions, and the typical deviation between data and the fit may be large.
This leads to a straightforward, practical way to assess the manifold assumption: if the model/predictor/classifier developed from the manifold assumption works acceptably well, then the assumption was justified. Thus, the appropriate conditions sought in the question will be that some relevant measure of goodness of fit be acceptably small. (What measure? It depends on the problem and is tantamount to selecting a loss function.)
It is possible that manifolds of different dimension (with different kinds of constraints on their curvature) may fit the data--and predict held-out data--equally well. Nothing can be "proven" about "the underlying" manifold in general, especially when working with large, messy, human datasets. All we can usually hope for is that the fitted manifold is a good model.
If you do not come up with a good model/predictor/classifier, then either the manifold assumption is invalid, you are assuming manifolds of too small a dimension, or you haven't looked hard enough or well enough.
25,565 | How to prove that the manifold assumption is correct? | Any finite set of points can be made to lie on some manifold (theorem reference needed; I can't remember what the theorem is, I just remember this fact from uni).
If one does not want all points to be identified, then the lowest possible dimension is 1.
Take as a simple example: given N 2d points, there exists some polynomial of degree N - 1 on which all N points lie. Therefore we have a 1d manifold for any 2d dataset. I think the logic for arbitrary dimensions is similar.
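This interpolation argument can be checked directly; a quick numpy sketch (mine, not part of the original answer; the five points are made up):

```python
import numpy as np

# Five arbitrary 2-d points with distinct x-coordinates.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([2.0, -1.0, 0.5, 3.0, -2.0])
N = len(x)

# A polynomial of degree N - 1 has N coefficients, so it interpolates
# all N points exactly: a 1-d curve passing through the whole dataset.
coeffs = np.polyfit(x, y, deg=N - 1)
fitted = np.polyval(coeffs, x)

print(np.max(np.abs(fitted - y)))  # essentially 0 (floating-point noise)
```

The same construction works for any N, which is why a perfect fit alone says nothing about the manifold assumption.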
So, that's not the issue; the real assumptions are on the structure/simplicity of the manifold, particularly when treating connected Riemannian manifolds as metric spaces. I've read papers on this manifold hocus pocus, and found that if you read carefully some pretty huge assumptions emerge!
The assumptions are made when the induced definition of "closeness" is assumed to "preserve the information in our dataset", but since this is not formally defined in Information Theoretic terms, the resulting definition is pretty ad hoc and quite a huge assumption indeed. In particular, the problem seems to be that "closeness" is preserved, i.e. two close points stay close, but that "farness" is not, and so two "far" points do not stay far.
In conclusion I would be very wary of such trickery in machine learning unless it's known the dataset is indeed naturally Euclidean, e.g. visual pattern recognition. I would not consider these approaches appropriate for more general problems.
25,566 | Pros of Jeffries Matusita distance | Some key differences, preceding a longer explanation below, are that:
Crucially: the Jeffries-Matusita distance applies to distributions, rather than vectors in general.
The J-M distance formula you quote above only applies to vectors representing discrete probability distributions (i.e. vectors that sum to 1).
Unlike the Euclidean distance, the J-M distance can be generalised to any distributions for which the Bhattacharyya distance can be formulated.
The J-M distance has, via the Bhattacharyya distance, a probabilistic interpretation.
The Jeffries-Matusita distance, which seems to be particularly popular in the Remote Sensing literature, is a transformation of the Bhattacharyya distance (a popular measure of the dissimilarity between two distributions, denoted here as $b(p,q)$) from the range $[0, \infty)$ to the fixed range $[0, \sqrt{2}]$:
$$
JM_{p,q}=\sqrt{2(1-\exp(-b(p,q)))}
$$
A practical advantage of the J-M distance, according to this paper is that this measure "tends to suppress high separability values, whilst overemphasising low separability values".
The Bhattacharyya distance measures the dissimilarity of two distributions $p$ and $q$ in the following abstract continuous sense:
$$
b(p,q)=-\ln\int{\sqrt{p(x)q(x)}}dx
$$
If the distributions $p$ and $q$ are captured by histograms, represented by vectors that sum to 1 (where the $i$th element is the normalised count for the $i$th of $N$ bins), this becomes:
$$
b(p,q)=-\ln\sum_{i=1}^{N}\sqrt{p_i\cdot q_i}
$$
And consequently the J-M distance for the two histograms is:
$$
JM_{p,q}=\sqrt{2\left(1-\sum_{i=1}^{N}{\sqrt{p_i\cdot q_i}}\right)}
$$
Which, noting that for normalised histograms $\sum_{i}{p_i}=1$, is the same as the formula you gave above:
$$
JM_{p,q}=\sqrt{\sum_{i=1}^{N}{\left(\sqrt{p_i} - \sqrt{q_i}\right)^2}}=\sqrt{\sum_{i=1}^{N}{\left(p_i -2 \sqrt{p_i}\sqrt{q_i} + q_i \right)}}=\sqrt{2\left(1-\sum_{i=1}^{N}{\sqrt{p_i\cdot q_i}}\right)}
$$
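To make the algebra above concrete, here is a small Python sketch (mine; the two example histograms are arbitrary) that computes the J-M distance both through the Bhattacharyya distance and through the closed-form sum, confirming they agree and stay within $[0, \sqrt{2}]$:

```python
import numpy as np

def jm_distance(p, q):
    """Jeffries-Matusita distance between two discrete distributions."""
    p = np.asarray(p, dtype=float) / np.sum(p)  # normalise counts to sum to 1
    q = np.asarray(q, dtype=float) / np.sum(q)
    bc = np.sum(np.sqrt(p * q))                 # Bhattacharyya coefficient
    b = -np.log(bc)                             # Bhattacharyya distance b(p, q)
    via_b = np.sqrt(2.0 * (1.0 - np.exp(-b)))   # J-M via the transformation of b
    direct = np.sqrt(np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))  # closed-form sum
    return via_b, direct

via_b, direct = jm_distance([20, 10, 5, 3, 2, 1], [1, 2, 3, 5, 10, 20])
print(via_b, direct)  # the two routes agree; the value lies in [0, sqrt(2)]
```

The agreement is exact by the derivation above, since $\exp(-b(p,q))$ is just the Bhattacharyya coefficient $\sum_i \sqrt{p_i q_i}$.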
25,567 | How to use the chi-squared test to determine if data follow the Poisson distribution | The way you did the chi-squared test is not correct. There are several issues. First, your data frame looks like this:
variable frequency
1 0 20
2 1 10
3 2 5
4 3 3
5 4 2
6 5 1
So when you run mean(df$variable), you get 2.5, which is just the mean of 0:5. That is, it is unweighted. Instead, create your variable like this:
x = rep(0:5, times=c(20, 10, 5, 3, 2, 1))
table(x)
# x
# 0 1 2 3 4 5
# 20 10 5 3 2 1
mean(x)
# [1] 1.02439
The table() call shows that the code gives us what we wanted, and so mean() estimates lambda correctly.
Next, your estimated probabilities only go to 5, but the Poisson distribution goes to infinity. So you need to account for the probabilities of the values that you don't have in your dataset. This is not hard to do, you just calculate the complement:
probs = dpois(0:5, lambda=mean(x))
probs
# [1] 0.359015310 0.367771781 0.188370912 0.064321775 0.016472650 0.003374884
comp = 1-sum(probs)
# [1] 0.0006726867
Lastly, in R's chisq.test() function, the x= and y= arguments aren't exactly for the expected and observed values in the way you set this up. For one thing, what you are calling "expected" are actually probabilities (i.e., the output from dpois()); to make these expected values, you would have to multiply those probabilities (and be sure to include the complement) by the total count. But even then, you wouldn't use those for y=. At any rate, you don't actually have to do that, you can just assign the probabilities to the p= argument. In addition, you will need to add a 0 to your observed values vector to represent all of the possible values that don't show up in your dataset:
chisq.test(x=c(20, 10, 5, 3, 2, 1, 0), p=c(probs, comp))
# Chi-squared test for given probabilities
#
# data: c(20, 10, 5, 3, 2, 1, 0)
# X-squared = 12.6058, df = 6, p-value = 0.04974
#
# Warning message:
# In chisq.test(x = c(20, 10, 5, 3, 2, 1, 0), p = c(probs, comp)) :
# Chi-squared approximation may be incorrect
The warning message suggests we may prefer to simulate instead, so we try again:
chisq.test(x=c(20, 10, 5, 3, 2, 1, 0), p=c(probs, comp), simulate.p.value=TRUE)
# Chi-squared test for given probabilities with simulated p-value
# (based on 2000 replicates)
#
# data: c(20, 10, 5, 3, 2, 1, 0)
# X-squared = 12.6058, df = NA, p-value = 0.07046
This is presumably a more accurate p-value, but it raises a question about how it should be interpreted. You ask "As the P-value is >0.05, I have concluded below that the distribution of variable follows a Poisson distribution - could someone confirm this?" Using the correct approach, we note that the first p-value was just <.05, but the second (simulated) p-value was just >.05. Although the latter p-value is more accurate, I would not rush to conclude that the data did come from a Poisson distribution. Here are some facts to bear in mind:
As stated in the title of a paper by Gelman and Stern, The difference between "significant" and "non-significant" is not itself statistically significant.
Real data never come from any of the idealized distributions, a fact pointed out well by @Glen_b recently here: What distribution does this histogram look like?
Real data tends not to follow the simple distributional shapes of the common one-, two- or three- parameter distributions. Real distributions are more like heterogeneous mixtures. Simple distributional forms are convenient fictions (models, to be precise) - they approximate reality in ways that make it easier to work with.
You cannot use the fact of a non-significant result to affirm the null hypothesis, as I explain here: Why do statisticians say a non-significant result means "you can't reject the null" as opposed to accepting the null hypothesis?
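For readers working outside R, the same computation can be cross-checked in Python with scipy (this sketch is mine, not part of the original answer):

```python
import numpy as np
from scipy import stats

observed = np.array([20, 10, 5, 3, 2, 1, 0])   # counts for 0..5 plus a ">5" bin
n = observed.sum()                             # 41 observations in total
lam = np.sum(np.arange(6) * observed[:6]) / n  # weighted mean, about 1.02439

probs = stats.poisson.pmf(np.arange(6), lam)
probs = np.append(probs, 1.0 - probs.sum())    # complement for values above 5

# Goodness-of-fit test against the expected counts n * probs.
stat, p = stats.chisquare(f_obs=observed, f_exp=n * probs)
print(round(stat, 4), round(p, 5))             # 12.6058, matching the R output
```

As in R, several expected counts are below 5, which is why the chi-squared approximation is questionable and simulation is preferable.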
25,568 | How to use the chi-squared test to determine if data follow the Poisson distribution | If I have understood what you meant you should:
estimate the parameter of the Poisson distribution for your data, assuming it is Poisson distributed, say
lambdaEst = mean(x)
calculate, for each $0,1, 2, ...$, their theoretical probabilities assuming a Poisson distribution, for example
probTheo0 = dpois(x = 0, lambda = lambdaEst, log = FALSE)
then compare actual with theoretical probabilities via a chi-square test following this approach ChiSquare Test CV solution
25,569 | How to use the chi-squared test to determine if data follow the Poisson distribution
This small script would work for any data.frame with observations:
df <- data.frame(variable = 0:5, frequency = c(20, 10, 5, 3, 2, 1))
N <- df$variable
observed <- df$frequency / sum(df$frequency)
lambdaEst <- sum(observed * N)
probs <- dpois(df$variable, lambda = lambdaEst)
comp = 1 - sum(probs)
chisq.test(x = c(df$frequency, 0), p = c(probs, comp))  # counts in variable order, plus 0 for the unobserved values > 5
| How to use the chi-squared test to determine if data follow the Poisson distribution
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
This small script would work for any data.frame with observations:
df <- data.frame(variable = 0:5, frequency = c(20, 10, 5, 3, 2, 1))
N <- df$variable
observed <- df$frequency / sum(df$frequency)
lambdaEst <- sum(observed * N)
probs <- dpois(df$variable, lambda = lambdaEst)
comp = 1 - sum(probs)
chisq.test(x= c(rev(sort(df$frequency)),0), p=c(probs, comp)) | How to use the chi-squared test to determine if data follow the Poisson distribution
Want to improve this post? Provide detailed answers to this question, including citations and an explanation of why your answer is correct. Answers without enough detail may be edited or deleted.
|
25,570 | How would re-weighting American Community Survey diversity data affect its margins of error? | Update 2014-01-15
I realize that I didn't answer Danica's original question about whether the margin of error for the indirectly adjusted proportion disabled would be larger or smaller than the margin of error for the same rate in ACS. The answer is: if the company category proportions do not differ drastically from the state ACS proportions, the margin of error given below will be smaller than the ACS margin of error.
The reason: the indirect rate treats organization job category person counts (or relative proportions) as fixed numbers. The ACS estimate of proportion disabled requires, in effect, an estimate of those proportions, and the margins of error will increase to reflect this.
To illustrate, write the disabled rate as:
$$
\hat{P}_{adj} = \sum_i \dfrac{n_i}{n} \hat{p}_i
$$
where $\hat{p}_i$ is the estimated disabled rate in category $i$ in the ACS.
On the other hand, the ACS estimated rate is, in effect:
$$
\hat{P}_{acs} = \sum_i \widehat{\left(\frac{N_i}{N}\right)} \hat{p}_i
$$
where $N_i$ and $N$ are respectively the population category and overall totals and $N_i/N$ is the population proportion in category $i$.
Thus, the standard error for the ACS rate will be larger because of the need to estimate $N_i/N$ in addition to $p_i$.
If the organization category proportions and population estimated proportions differ greatly, then it is possible that $SE( \hat{P}_{adj} )>SE( \hat{P}_{acs} )$. In a two-category example that I constructed, the categories were represented in proportions $N_1/N= 0.7345$ and $N_2/N= 0.2655$. The standard error for the estimated proportion disabled was $SE( \hat{P}_{acs} ) = 0.0677$.
If I considered 0.7345 and 0.2655 to be the fixed values $n_1/n$ and $n_2/n$ (the indirect adjustment approach), $SE(\hat{P}_{adj} )=0.0375$, much smaller. If instead, $n_1/n= 0.15$ and $n_2/n =0.85$, $SE( \hat{P}_{adj} )=0.0678$, about the same as $SE( \hat{P}_{acs} )$ At the extreme $n_1/n= 0.001$ and $n_2/n =0.999$, $SE( \hat{P}_{adj} )=0.079$.
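The point that estimating the category shares inflates the standard error can be sketched numerically. Everything below is hypothetical (these are not the example's actual ACS variances), and the extra term for multinomial share estimates comes from a standard delta-method approximation:

```python
import numpy as np

# Two categories, hypothetical numbers throughout:
w = np.array([0.7345, 0.2655])      # category shares (treated as known)
p = np.array([0.06, 0.18])          # estimated disabled rate per category
var_p = np.array([0.0006, 0.0030])  # variances of those rate estimates
m = 400                             # effective sample behind the share estimates

# Indirect adjustment: shares fixed, only the rates contribute variance.
var_adj = np.sum(w ** 2 * var_p)

# ACS-style estimate: the shares must be estimated too; for multinomial
# shares the delta method adds a term proportional to (p1 - p2)^2.
var_acs = var_adj + (p[0] - p[1]) ** 2 * w[0] * w[1] / m

print(var_adj < var_acs)  # True: estimating the shares inflates the SE
```

The extra term vanishes when the category rates are equal, which matches the intuition that share uncertainty only matters when the categories actually differ.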
I'd be surprised if organization and population category proportions differ so drastically. If they don't, I think that it's safe to use the ACS margin of error as a conservative, possibly very conservative, estimate of the true margin of error.
Update 2014-01-14
Short answer
In my opinion, it would be irresponsible to present such a statistic without a CI or margin of error (half CI length). To compute these, you will need to download and analyze the ACS Public Use Microdata Sample (PUMS) (http://www.census.gov/acs/www/data_documentation/public_use_microdata_sample/).
Long answer
This isn't really a re-weighting of the ACS. It is a version of indirect standardization, a standard procedure in epidemiology (google or see any epi text). In this case state ACS job (category) disability rates are weighted by organization job category employee counts. This will compute an expected number of disabled people in the organization E, which can be compared to the observed number O. The usual metric for the comparison is a standardized ratio R= (O/E). (The usual term is "SMR", for "standardized mortality ratio", but here the "outcome" is disability.). R is also the ratio of the observed disability rate (O/n) and the indirectly standardized rate (E/n), where n is the number of the organization's employees.
In this case, it appears that only a CI for E or E/n will be needed, so I will start with that:
If
n_i = the organization employee count in job category i
p_i = disability rate for job category i in the ACS
Then
E = sum (n_i p_i)
The variance of E is:
var(E) = nn' V nn
where nn is the column vector of organization category counts and V is the estimated variance-covariance matrix of the ACS category disability rates.
Also, trivially, se(E) = sqrt(var(E)) and se(E/n) = se(E)/n.
and a 90% CI for E is
E ± 1.645 SE(E)
Divide by n to get the CI for E/n.
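A minimal numerical sketch of the formulas above (Python; the counts, rates, and the matrix V are invented for illustration, since in practice V would be estimated from the PUMS analysis):

```python
import numpy as np

# Hypothetical organization and ACS inputs:
n = np.array([120, 45, 30])            # n_i: employees per job category
p = np.array([0.05, 0.12, 0.08])       # p_i: ACS disability rate per category
V = np.diag([0.0004, 0.0009, 0.0016])  # made-up variance-covariance of the rates

E = n @ p          # expected number disabled: sum(n_i * p_i)
var_E = n @ V @ n  # the quadratic form nn' V nn
se_E = np.sqrt(var_E)

ci = (E - 1.645 * se_E, E + 1.645 * se_E)  # 90% CI for E
print(E, ci)  # divide E and both endpoints by sum(n) for the rate E/n
```

A diagonal V assumes the category rate estimates are uncorrelated; with survey data the off-diagonal covariances would generally be nonzero.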
To estimate var(E) you would need to download and analyze the ACS Public Use Microdata Sample (PUMS) data (http://www.census.gov/acs/www/data_documentation/public_use_microdata_sample/).
I can only speak of the process for computing var(E) in Stata. As I don't know if that's available to you, I'll defer the details. However someone knowledgeable about the survey capabilities of R or (possibly) SAS can also provide code from the equations above.
Confidence Interval for the ratio R
Confidence intervals for R are ordinarily based on a Poisson assumption for O, but this assumption may be incorrect.
We can consider O and E to be independent, so
log R = log(O) - log(E) ->
var(log R) = var(log O) + var(log(E))
var(log(E)) can be computed as one more Stata step after the computation of var(E).
Under the Poisson independence assumption:
var(log O) ~ 1/E(O).
A program like Stata could fit, say, a negative binomial model or generalized linear model and give you a more accurate variance term.
An approximate 90% CI for log R is
log R ± 1.645 sqrt(var(log R))
and the endpoints can be exponentiated to get the CI for R.
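The ratio CI can be sketched the same way (hypothetical O, E, and var(E); var(log O) uses the Poisson approximation, plugging the observed O in for E(O)):

```python
import numpy as np

O = 9.0         # observed number disabled (hypothetical)
E = 13.8        # expected number from the indirect standardization (hypothetical)
var_E = 9.0225  # variance of E via the quadratic form nn' V nn (hypothetical)

R = O / E
# Delta method: var(log E) ~ var(E) / E^2; Poisson assumption: var(log O) ~ 1/O.
var_logR = 1.0 / O + var_E / E ** 2
lo, hi = np.exp(np.log(R) + np.array([-1.645, 1.645]) * np.sqrt(var_logR))
print(R, (lo, hi))  # approximate 90% CI for the standardized ratio R
```

If the interval for R covers 1, the organization's observed disability count is compatible with the indirectly standardized expectation at this confidence level.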
I realize that I didn't answer Danica's original question about whether the margin of error for the indirectly adjusted proportion disabled would be larger or smaller than the margin | How would re-weighting American Community Survey diversity data affect its margins of error?
Update 2014-01-15
I realize that I didn't answer Danica's original question about whether the margin of error for the indirectly adjusted proportion disabled would be larger or smaller than the margin of error for the same rate in ACS. The answer is: if the company category proportions do not differ drastically from the state ACS proportions, the margin of error given below will be smaller than the ACS margin of error.
The reason: the indirect rate treats organization job category person counts (or relative proportions) as fixed numbers. The ACS estimate of proportion disabled requires, in effect, an estimate of those proportions, and the margins of error will increase to reflect this.
To illustrate, write the disabled rate as:
$$
\hat{P}_{adj} = \sum \dfrac{n_i}{n} \hat{p_i} \\
$$
where $\hat{p}_i$ is the estimated disabled rate in category $i$ in the ACS.
On the other hand, the ACS estimated rate is, in effect:
$$
\hat{P}_{acs} = \sum\widehat{\left(\frac{N_i}{N}\right)} \hat{p_i} $$
where $N_i$ and $N$ are respectively the population category and overall totals and $N_i/N$ is the
population proportion in category $i$.
Thus, the standard error for the ACS rate will be larger because of the need to estimate $N_i/N$ in addition to $p_i$.
If the organization category proportions and population estimated proportions differ greatly, then the it is possible that $SE( \hat{P}_{adj} )>SE( \hat{P}_{acs} )$. In a two-category example that I constructed, the categories were represented in proportions $N_1/N= 0.7345$ and $N_2/N= 0.2655$. The standard error for the estimated proportion disabled
was $SE( \hat{P}_{acs} ) = 0.0677$.
If I considered 0.7345 and 0.2655 to be the fixed values $n_1/n$ and $n_2/n$ (the indirect adjustment approach), $SE(\hat{P}_{adj} )=0.0375$, much smaller. If instead, $n_1/n= 0.15$ and $n_2/n =0.85$, $SE( \hat{P}_{adj} )=0.0678$, about the same as $SE( \hat{P}_{acs} )$ At the extreme $n_1/n= 0.001$ and $n_2/n =0.999$, $SE( \hat{P}_{adj} )=0.079$.
I'd be surprised if organization and population category proportions differ so drastically. If they don't, I think that it's safe to use the ACS margin of error as a conservative, possibly very conservative, estimate of the true margin of error.
Update 2014-01-14
Short answer
In my opinion, it would be irresponsible to present such a statistic without a CI or margin of error (half CI length). To compute these, you will need to download and analyze the ACS Public Use Microdata Sample (PUMS) (http://www.census.gov/acs/www/data_documentation/public_use_microdata_sample/).
Long answer
This isn't really a re-weighting of the ACS. It is a version of indirect standardization, a standard procedure in epidemiology (google or see any epi text). In this case state ACS job (category) disability rates are weighted by organization job category employee counts. This will compute an expected number of disabled people in the organization E, which can be compared to the observed number O. The usual metric for the comparison is a standardized ratio R= (O/E). (The usual term is "SMR", for "standardized mortality ratio", but here the "outcome" is disability.). R is also the ratio of the observed disability rate (O/n) and the indirectly standardized rate (E/n), where n is the number of the organization's employees.
In this case, it appears that only a CI for E or E/n will be needed, so I will start with that:
If
n_i = the organization employee count in job category i
p_i = disability rate for job category i in the ACS
Then
E = sum (n_i p_i)
The variance of E is:
var(E) = nn' V nn
where nn is the column vector of organization category counts and V is the estimated variance-covariance matrix of the ACS category disability rates.
Also, trivially, se(E) = sqrt(var(E)) and se(E/n) = se(E)/n.
and a 90% CI for E is
E ± 1.645 SE(E)
Divide by n to get the CI for E/n.
To estimate var(E) you would need to download and analyze the ACS Public Use Microdata Sample (PUMS) data (http://www.census.gov/acs/www/data_documentation/public_use_microdata_sample/).
I can only speak of the process for computing var(E) in Stata. As I don't know if that's available to you, I'll defer the details. However someone knowledgeable about the survey capabilities of R or (possibly) SAS can also provide code from the equations above.
Confidence Interval for the ratio R
Confidence intervals for R are ordinarily based on a Poisson assumption for O, but this assumption may be incorrect.
We can consider O and E to be independent, so
log R = log(O) - log(E) ->
var(log R) = var(log O) + var(log(E))
var(log(E)) can be computed as one more Stata step after the computation of var(E).
Under the Poisson independence assumption:
var(log O) ~ 1/E(O).
A program like Stata could fit, say, a negative binomial model or generalized linear model and give you a more accurate variance term.
An approximate 90% CI for log R is
log R ± 1.645 sqrt(var(log R))
and the endpoints can be exponentiated to get the CI for R.
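The ratio CI recipe can likewise be sketched numerically (hypothetical O, E, and var(E); var(log E) is obtained here by the delta method, var(E)/E², and var(log O) by the Poisson approximation 1/O):

```python
import math

# Hypothetical observed count O, with E and var(E) of the kind produced by
# the expected-count step; all numbers invented for illustration.
O, E, var_E = 22.0, 16.95, 0.353

R = O / E                              # standardized ratio (O/E)

# var(log O) ~ 1/O under the Poisson approximation;
# var(log E) ~ var(E)/E^2 by the delta method.
var_logR = 1.0 / O + var_E / E ** 2
half = 1.645 * math.sqrt(var_logR)
ci_R = (math.exp(math.log(R) - half), math.exp(math.log(R) + half))
```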
25,571 | How would re-weighting American Community Survey diversity data affect its margins of error? | FWIW there are good resources for the ACS and accessing PUMS here (http://www.asdfree.com/2012/12/analyze-american-community-survey-acs.html).
Also there is a package for handling ACS data on the CRAN - called, naturally, ACS - which I have found really helpful for doing atypical things with ACS data. This is a good step-by-step for the package (unfortunately the documentation isn't super intuitive) -- http://dusp.mit.edu/sites/all/files/attachments/publication/working_with_acs_R.pdf
25,572 | How would re-weighting American Community Survey diversity data affect its margins of error? | adding to the http://asdfree.com link in @pricele2's answer..in order to solve this problem with free software, i would encourage you to follow these steps:
(1) (two hours of hard work) get acquainted with the r language. watch the first 50 videos, two minutes each
http://twotorials.com/
(2) (one hour of easy instruction-following) install monetdb on your computer
http://www.asdfree.com/2013/03/column-store-r-or-how-i-learned-to-stop.html
(3) (thirty minutes of instruction-following + overnight download) download the acs pums onto your computer. only get the years you need.
https://github.com/ajdamico/usgsd/blob/master/American%20Community%20Survey/download%20all%20microdata.R
(4) (four hours of learning and programming and checking your work) recode the variables that you need to recode, according to whatever specifications you require
https://github.com/ajdamico/usgsd/blob/master/American%20Community%20Survey/2011%20single-year%20-%20variable%20recode%20example.R
(5) (two hours of actual analysis) run the exact command you're looking for, capture the standard error, and calculate a confidence interval.
https://github.com/ajdamico/usgsd/blob/master/American%20Community%20Survey/2011%20single-year%20-%20analysis%20examples.R
(6) (four hours of programming) if you need a ratio estimator, follow the ratio estimation example (with correctly-survey-adjusted standard error) here:
https://github.com/ajdamico/usgsd/blob/master/Censo%20Demografico/variable%20recode%20example.R#L552
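Step (5)'s "capture the standard error" relies on the ACS replicate weights: re-compute the statistic under each of the 80 replicate weights and combine. A rough Python sketch of the idea (simulated stand-in data; the 4/80 successive-difference factor follows the published ACS convention, but for real work use the survey tooling linked above rather than this sketch):

```python
import random

# Simulated stand-ins for PUMS records: a 0/1 disability indicator, a main
# person weight, and 80 replicate weights (here crude random perturbations).
random.seed(0)
n = 500
disabled = [i % 12 == 0 for i in range(n)]                 # ~8% "disabled"
pwgtp = [random.uniform(10.0, 30.0) for _ in range(n)]     # main weights
repwts = [[w * random.uniform(0.7, 1.3) for w in pwgtp] for _ in range(80)]

def weighted_rate(weights):
    return sum(w for w, d in zip(weights, disabled) if d) / sum(weights)

theta0 = weighted_rate(pwgtp)
# replicate-weight variance: var(theta) = (4/80) * sum_r (theta_r - theta0)^2
var = (4.0 / 80.0) * sum((weighted_rate(wr) - theta0) ** 2 for wr in repwts)
se = var ** 0.5
ci90 = (theta0 - 1.645 * se, theta0 + 1.645 * se)
```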
25,573 | What is a Classifier? | From Wikipedia,
"An algorithm that implements classification, especially in a concrete
implementation, is known as a classifier. The term "classifier"
sometimes also refers to the mathematical function, implemented by a
classification algorithm, that maps input data to a category."
For the whole source: http://en.wikipedia.org/wiki/Statistical_classification
"An algorithm that implements classification, especially in a concrete
implementation, is known as a classifier. The term "classifier"
sometimes also refers to the mathematical f | What is a Classifier?
From Wikipedia,
"An algorithm that implements classification, especially in a concrete
implementation, is known as a classifier. The term "classifier"
sometimes also refers to the mathematical function, implemented by a
classification algorithm, that maps input data to a category."
For the whole source: http://en.wikipedia.org/wiki/Statistical_classification | What is a Classifier?
From Wikipedia,
"An algorithm that implements classification, especially in a concrete
implementation, is known as a classifier. The term "classifier"
sometimes also refers to the mathematical f |
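A minimal concrete instance of that definition — a function mapping input data to a category — with entirely made-up thresholds:

```python
# A classifier in the mapping-function sense of the quote: inputs go in,
# a category comes out. The rule and thresholds here are invented.
def classify_temperature(celsius):
    """Map a temperature reading to one of three categories."""
    if celsius < 10:
        return "cold"
    elif celsius < 25:
        return "mild"
    return "hot"

labels = [classify_temperature(t) for t in (3, 18, 30)]   # one label per input
```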
25,574 | What is a Classifier? | A classifier is an algorithm that maps input data to a specific category.
A category is any population of objects that can be clubbed together on the basis of their similarities.
25,575 | What is a Classifier? | A classifier can also refer to the field in the dataset which is the dependent variable of a statistical model.
For example, in a churn model which predicts if a customer is at-risk of cancelling his/her subscription, the classifier may be a binary 0/1 flag variable in the historical analytical dataset, off of which the model was developed, which signals if the record has churned (1) or not churned (0).
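A minimal sketch of such a flag variable, built from hypothetical subscription records:

```python
# Hypothetical subscription records; "churn" is the 0/1 dependent variable
# a model would be trained to predict.
customers = [
    {"id": 1, "cancelled": True},
    {"id": 2, "cancelled": False},
    {"id": 3, "cancelled": True},
]
for c in customers:
    c["churn"] = 1 if c["cancelled"] else 0

churn_rate = sum(c["churn"] for c in customers) / len(customers)
```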
25,576 | What is a Classifier? | A classifier is a system where you input data and then obtain outputs related to the grouping (i.e., classification) to which those inputs belong.
As an example, a common dataset to test classifiers with is the iris dataset. The data that gets input to the classifier contains four measurements related to some flowers' physical dimensions. The job of the classifier then is to output the correct flower type for every input.
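A toy version of that setup: invented flower measurements and a nearest-centroid rule standing in for a real classifier (only two features and two species here for brevity; the real iris task has four measurements and three species):

```python
# Invented measurements: (sepal length, petal width) for two species only.
train = {
    "setosa":    [(5.0, 0.2), (4.8, 0.3), (5.2, 0.2)],
    "virginica": [(6.8, 2.1), (7.0, 2.3), (6.5, 2.0)],
}

# One centroid per species: the mean of its training points.
centroids = {
    label: tuple(sum(pt[i] for pt in pts) / len(pts) for i in range(2))
    for label, pts in train.items()
}

def classify(x):
    """Assign x to the species whose centroid is nearest (squared distance)."""
    return min(
        centroids,
        key=lambda lab: sum((a - b) ** 2 for a, b in zip(x, centroids[lab])),
    )

pred = classify((6.9, 2.2))
```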
25,577 | Piecewise linear regression with knots as parameters | Making the knots free parameters in the model turns the problem into a complex one not amenable to using standard estimation software. Computation of standard errors becomes very complex. Linear splines are very sensitive to where the knots are placed, and model "elbows" that are unlikely to be real unless $X=$ calendar time. Cubic splines have the advantages of (1) not having elbows because they have 3 orders of continuity, and (2) giving similar fits even if you move the knots around. Thus you can usually set knots based on quantiles of $X$ and not make knot estimation part of the optimization problem. Restricting the cubic regression splines to be linear in the tails (beyond the outer knots), called natural splines or restricted cubic splines, reduces the number of parameters to estimate and makes for more realistic fits.
This approach allows you to use standard estimation and hypothesis testing tools and does not require any special regression fitting functions, once you create the design matrix. Much more information is at Handouts under http://biostat.mc.vanderbilt.edu/CourseBios330. Once you fit the restricted cubic spline you can plot it along with confidence bands (which are obtained using standard methods also) and see slope changes. If you have special knowledge of regions of volatility you can put two knots closer together in that pre-specified region of $X$.
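A small sketch of the restricted cubic spline basis described above, in the truncated-power (unnormalized) form; the knot values here are arbitrary stand-ins for the quantile-based knots recommended in the answer. The check at the end confirms the tail-linearity property (every basis term is linear beyond the outer knots):

```python
# Restricted cubic spline basis: for knots t_1..t_k, the j-th nonlinear term is
#   C_j(x) = (x-t_j)+^3 - (x-t_{k-1})+^3 (t_k-t_j)/(t_k-t_{k-1})
#                       + (x-t_k)+^3   (t_{k-1}-t_j)/(t_k-t_{k-1})
def rcs_basis(x, knots):
    """Return [x, C_1(x), ..., C_{k-2}(x)]."""
    tk, tk1 = knots[-1], knots[-2]
    pos = lambda u: max(u, 0.0) ** 3
    terms = [x]
    for tj in knots[:-2]:
        terms.append(
            pos(x - tj)
            - pos(x - tk1) * (tk - tj) / (tk - tk1)
            + pos(x - tk) * (tk1 - tj) / (tk - tk1)
        )
    return terms

knots = [1.0, 4.0, 7.0, 10.0]          # in practice: quantiles of X
# Beyond the last knot every term is linear, so second differences vanish.
xs = [11.0, 12.0, 13.0]
cols = [rcs_basis(x, knots) for x in xs]
second_diff = [cols[0][j] - 2 * cols[1][j] + cols[2][j] for j in range(3)]
```

The resulting columns can be fed to any ordinary least-squares routine, which is the point of the answer: no special fitting software is needed.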
25,578 | Piecewise linear regression with knots as parameters | Frank Harrell suggested interesting alternatives. There are, however, cases where one might be interested in estimating a piecewise linear model:
Interest in the knot location per se: the knot location can represent a tipping point or discontinuity point that one wants to know.
Reduced number of parameters
I assume here that you are interested in finding the location of the knots. This is known as segmented regression and threshold regression in some literature, which are general cases of the changepoint and structural-break regressions (the $X=$ calendar time case in Frank's answer). Note that in these models, the lines are not necessarily restricted to pass through the knots (i.e. you fit intercept and slope separately in each regime).
This literature answers your two questions:
Estimation: this is usually done with non-linear least squares (NLS). A simple algorithm searches over a grid for every knot, then picks the one with the lowest LS error. With multiple knots, this algorithm would require a 2D grid, 3D grid, etc., which becomes infeasible, but luckily much more efficient solutions have been suggested (see Killick et al. 2012 as one example). Several R packages allow this, for example segmented or seglm.
An alternative estimation is to use a LASSO-like estimator with one coefficient for each observation, penalizing differences between consecutive coefficients (i.e. use the penalty $|\beta_k - \beta_{k-1}|$). Knots are the locations where $\beta_k \neq \beta_{k-1}$. An advantage is that efficient estimators exist in this case; see for example Tibshirani and Taylor (2011). This is furthermore implemented in the R package genlasso.
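The grid-search estimator described above can be sketched for a single threshold, fitting a separate intercept and slope in each regime and keeping the split with the smallest total squared error (synthetic data with a known break; a bare illustration, not the segmented/seglm implementations):

```python
def slr_sse(xs, ys):
    """Sum of squared errors from a simple linear regression fit."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
    a = my - b * mx
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

# Synthetic data with a known break at x = 5 (disjoint regimes: a jump
# and a slope change at the threshold).
x = [i * 0.5 for i in range(21)]                       # 0, 0.5, ..., 10
y = [1 + 2 * xi if xi < 5 else 20 - xi for xi in x]

best = None
for i in range(2, len(x) - 2):                         # >= 2 points per regime
    sse = slr_sse(x[:i], y[:i]) + slr_sse(x[i:], y[i:])
    if best is None or sse < best[0]:
        best = (sse, x[i])
knot = best[1]                                         # recovers 5.0 here
```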
Inference: the bad news is that inference is very complicated in these models (assuming there are a few large break points). See for example Hansen (1996, 2000).
References:
Hansen, B. E., March 1996. Inference when a nuisance parameter is not identified under the null hypothesis. Econometrica 64 (2), 413–30.
Hansen, B. E., May 2000. Sample splitting and threshold estimation. Econometrica 68 (3), 575–604.
Killick, R., Fearnhead, P. and Eckley, I. A. (2012), ‘Optimal Detection of Changepoints with a Linear Computational Cost’, Journal of the American Statistical Association 107(500), 1590–1598.
Tibshirani, R. J., and Taylor, J. (2011), “The Solution Path of the Generalized Lasso,” Annals of Statistics, 39, 1335–1371.
25,579 | Piecewise linear regression with knots as parameters | I made the R package mcp exactly because there is a lack of packages quantifying the uncertainty (e.g., SE) about the inferred change point locations. Change point problems are conceptually simple in a Bayesian framework, and computationally accessible using variants of Gibbs sampling (read more in this preprint).
mcp includes a dataset with three linear segments:
> head(ex_demo)
time response
1 68.35820 32.842651
2 87.29038 -1.160003
3 69.01173 27.564248
4 11.59361 10.062971
5 19.50091 14.056859
6 46.12009 18.292640
Let's fit a piecewise linear regression with three segments. In mcp you do this as a list, with one formula per segment:
library(mcp)
# Define the model
model = list(
  response ~ 1,   # plateau
  ~ 0 + time,     # joined slope
  ~ 1 + time      # disjoined slope
)
# Fit it.
fit = mcp(model, data = ex_demo)
Let's visualize it first:
plot(fit)
The blue curves on the x-axis are the posteriors of the change points. You can see them more directly using plot_pars(fit). Note that they rarely conform to any "clean" known density like the normal distribution.
See summaries using summary(fit). mcp includes functions to test parameter values, model comparison, etc. Read more on the mcp website.
25,580 | Piecewise linear regression with knots as parameters | MARS (Multivariate Adaptive Regression splines) is yet another approach which might be closer to what you're aiming for. Here's a link to the original paper and a python library implementing it: https://github.com/scikit-learn-contrib/py-earth.
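For intuition, MARS builds its piecewise-linear fits from hinge basis functions of the form max(0, x − t). A tiny hand-rolled illustration with a made-up knot (this is the underlying basis idea, not the py-earth API):

```python
def hinge(x, t):
    """MARS-style hinge basis function max(0, x - t)."""
    return max(0.0, x - t)

t = 3.0
# A piecewise-linear "model": flat at 1 until the knot, slope 2 afterwards.
model = lambda x: 1.0 + 2.0 * hinge(x, t)
vals = [model(x) for x in (1.0, 3.0, 5.0)]
```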
25,581 | Why is there a sharp elbow in my ROC curves? | A perfect ROC "curve" will be shaped with a sharp bend. The performance you have there is very near perfect separation. In addition, it looks like you have a scarcity of points making the curve.
A perfect ROC "curve" will be shaped with a sharp bend. The performance you have there is very near perfect separation. In addition, it looks like you have a scarcity of points making the curve. | Why is there a sharp elbow in my ROC curves?
A perfect ROC "curve" will be shaped with a sharp bend. The performance you have there is very near perfect separation. In addition, it looks like you have a scarcity of points making the curve. |
25,582 | Why is there a sharp elbow in my ROC curves? | Although this question was asked about 3 years ago, I find it useful to answer it here after coming across it and getting puzzled by it for some time. When your ground truth output is 0/1 and your prediction is 0/1, you get an angle-shaped elbow. If your prediction or ground truth are confidence values or probabilities (say in the range [0,1]), then you will get a curved elbow.
25,583 | Why is there a sharp elbow in my ROC curves? | I agree with John, in that the sharp curve is due to a scarcity of points. Specifically, it appears that you used your model's binary predictions (i.e. 1/0) and the observed labels (i.e. 1/0). Because of this, you have 3 points, one assumes a cutoff of Inf, one assumes a cutoff of 0, and the last assumes a cutoff of 1 which is given to you by your model's TPR and FPR and is located at the sharp angle in your graph.
Instead, you should be using the probabilities of the predicted class (values between 0 and 1) and the observed labels (i.e. 1/0). This will then give you a number of points on the graph that is equal to the number of unique probabilities you have (plus one for Inf). So if you have 100 unique probabilities, you will then have 101 points on the graph for each of the various cutoffs.
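The counting argument above can be checked directly with a minimal ROC sketch that emits one point per distinct cutoff (labels and scores invented): hard 0/1 predictions yield 3 points in total, while probability scores yield one point per unique score plus the Inf cutoff:

```python
def roc_points(labels, scores):
    """One (FPR, TPR) point per distinct cutoff, plus (0, 0) for cutoff = Inf."""
    P = sum(labels)
    N = len(labels) - P
    pts = [(0.0, 0.0)]
    for t in sorted(set(scores), reverse=True):
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
        pts.append((fp / N, tp / P))
    return pts

labels = [1, 1, 1, 0, 0, 0]
hard = [1, 1, 0, 0, 0, 1]                    # binary predictions
soft = [0.9, 0.8, 0.4, 0.35, 0.2, 0.6]       # probability scores

n_hard = len(roc_points(labels, hard))       # 3 points, hence the sharp angle
n_soft = len(roc_points(labels, soft))       # one point per unique score + 1
```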
25,584 | Is there a quick way to convert z-scores into percentile scores? | pnorm(z) will do it.
> pnorm(1.96)
[1] 0.9750021
> pnorm(0)
[1] 0.5
> pnorm(-1)
[1] 0.1586553
Or if you insist on a percentile, boom. Then try
round(pnorm(1.96)*100,0)
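For readers outside R, the same computation in Python, using the standard-normal CDF written via the error function:

```python
import math

def pnorm(z):
    """Standard-normal CDF, i.e. the Python equivalent of R's pnorm(z)."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def percentile(z):
    """Percentile score, as in the R one-liner round(pnorm(z)*100, 0)."""
    return round(pnorm(z) * 100)

vals = (pnorm(1.96), pnorm(0.0), pnorm(-1.0))   # ~0.975, 0.5, ~0.1587
```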
25,585 | Definition of dynamic Bayesian system, and its relation to HMM? | I'd recommend looking through these two excellent review papers:
An Introduction to Hidden Markov Models and Bayesian Networks by Zoubin Ghahramani
Dynamic Bayesian Networks by Kevin Murphy
HMMs are not equivalent to DBNs, rather they are a special case of DBNs in which the entire state of the world is represented by a single hidden state variable. Other models within the DBN framework generalize the basic HMM, allowing for more hidden state variables (see the second paper above for the many varieties).
Finally, no, DBNs are not always discrete. For example, linear Gaussian state models (Kalman Filters) can be conceived of as continuous valued HMMs, often used to track objects in space.
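To make the "Kalman filter as a continuous-valued HMM" point concrete, here is a one-dimensional filter for a random-walk hidden state observed with Gaussian noise (all noise variances and observations are illustrative):

```python
def kalman_1d(observations, q=0.1, r=1.0, x0=0.0, p0=1.0):
    """Filtered state means for a 1D random-walk state observed with noise.

    q = process-noise variance, r = observation-noise variance.
    """
    x, p = x0, p0
    means = []
    for z in observations:
        p = p + q                  # predict: state uncertainty grows by q
        k = p / (p + r)            # Kalman gain
        x = x + k * (z - x)        # update: move toward the observation
        p = (1.0 - k) * p
        means.append(x)
    return means

obs = [1.0, 1.2, 0.9, 1.1, 1.0]
est = kalman_1d(obs)
```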
25,586 | Collaborative filtering through matrix factorization with logistic loss function | We use logistic loss for implicit matrix factorization at Spotify in the context of music recommendations (using play counts). We've just published a paper on our method in an upcoming NIPS 2014 workshop. The paper is titled Logistic Matrix Factorization for Implicit Feedback Data and can be found here http://stanford.edu/~rezab/nips2014workshop/submits/logmat.pdf
Code for the paper can be found on my Github https://github.com/MrChrisJohnson/logistic-mf
We use logistic loss for implicit matrix factorization at Spotify in the context of music recommendations (using play counts). We've just published a paper on our method in an upcoming NIPS 2014 workshop. The paper is titled Logistic Matrix Factorization for Implicit Feedback Data and can be found here http://stanford.edu/~rezab/nips2014workshop/submits/logmat.pdf
Code for the paper can be found on my Github https://github.com/MrChrisJohnson/logistic-mf | Collaborative filtering through matrix factorization with logistic loss function
We use logistic loss for implicit matrix factorization at Spotify in the context of music recommendations (using play counts). We've just published a paper on our method in an upcoming NIPS 2014 works |
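As a rough illustration of the idea — this is a simplified sketch with plain gradient ascent on the Bernoulli log-likelihood, not the exact algorithm from the paper (no confidence weights, toy data):

```python
import numpy as np

rng = np.random.default_rng(0)
R = (rng.random((30, 20)) < 0.3).astype(float)   # toy binary interaction matrix
k, lr, reg = 5, 0.05, 0.05                       # rank, step size, L2 penalty
U = 0.1 * rng.standard_normal((30, k))
V = 0.1 * rng.standard_normal((20, k))

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def loglik(U, V):
    # Bernoulli log-likelihood of the observed 0/1 matrix under P = sigmoid(UV')
    P = sigmoid(U @ V.T)
    return np.sum(R * np.log(P) + (1 - R) * np.log(1 - P))

before = loglik(U, V)
for _ in range(300):
    G = R - sigmoid(U @ V.T)        # gradient of the log-likelihood wrt the scores
    U += lr * (G @ V - reg * U)
    V += lr * (G.T @ U - reg * V)
after = loglik(U, V)                # should be higher than `before`
```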
25,587 | Collaborative filtering through matrix factorization with logistic loss function | Most of the papers you'll find on the subject deal with matrices where the ratings are on a scale [0,5]. In the context of the Netflix Prize, for example, matrices have discrete ratings from 1 to 5 (plus the missing values). That's why the squared error is the most widespread cost function. Other error measures, such as the Kullback-Leibler divergence, are sometimes used.
Another problem that can occur with standard matrix factorization is that some of the elements of the matrices U and V may be negative (particularly during the first steps). That's a reason why you wouldn't use the log-loss here as your cost function.
However, if you're talking about Non-negative Matrix Factorization, you should be able to use the log-loss as your cost function. You are in a similar case to Logistic Regression, where log-loss is used as the cost function: your observed values are 0's and 1's and you predict a number (a probability) between 0 and 1.
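For the non-negative case mentioned above, the classic Lee & Seung multiplicative updates minimize exactly such a Kullback-Leibler-type divergence. A small sketch on toy data (the generalized KL divergence is sum(V*log(V/WH) - V + WH), which is non-increasing under these updates):

```python
import numpy as np

rng = np.random.default_rng(0)
V = rng.random((12, 10)) + 0.1      # nonnegative toy data matrix
k = 3
W = rng.random((12, k)) + 0.1
H = rng.random((k, 10)) + 0.1

def gen_kl(V, WH):
    # generalized Kullback-Leibler divergence between V and its reconstruction
    return np.sum(V * np.log(V / WH) - V + WH)

before = gen_kl(V, W @ H)
for _ in range(200):                # Lee & Seung multiplicative updates
    H *= (W.T @ (V / (W @ H))) / W.sum(axis=0)[:, None]
    W *= ((V / (W @ H)) @ H.T) / H.sum(axis=1)[None, :]
after = gen_kl(V, W @ H)            # should have decreased
```

Because the updates are multiplicative, W and H stay nonnegative automatically — which is why log-loss-style objectives are unproblematic here, unlike with unconstrained factors.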
25,588 | Mean residual life | You have a nonnegative random variable $X\newcommand{\E}{\mathbb{E}}$ with distribution function $F$. The mean residual life is defined as
$$
m(t) = \E\left[X - t\mid X >t\right] = \frac{\E\left[ (X-t) I_{\{X>t\}}\right]}{P\{X>t\}} = \frac{1}{1-F(t)} \int_t^\infty (x-t)\,dF(x) \, ,
$$
for $t>0$. But
$$
\int_t^\infty (x-t)\,dF(x) = \int_t^\infty \left(\int_t^x du\right)dF(x) \, . \quad (*)
$$
Tonelli's Theorem says that you can change the order of integration in $(*)$, but you have to be careful about the integration limits. Look at the following figure.
The original domain of integration is interpreted like this: $x$ varies from $t$ to $\infty$. For some fixed $x$, $u$ varies from $t$ to $x$, determining the filled region in the figure. Now, reverse the order: $u$ varies from $t$ to $\infty$. For a fixed $u$, $x$ varies from $u$ to $\infty$. Hence,
$$
\int_t^\infty \left(\int_t^x du\right)dF(x) = \int_t^\infty \left(\int_u^\infty dF(x)\right)du
$$
$$
= \int_t^\infty P\{X > u\}\,du = \int_t^\infty \left(1 - F(u)\right)\,du \, .
$$
Therefore,
$$
m(t) = \frac{1}{1-F(t)} \int_t^\infty \left(1 - F(u)\right)\,du \, .
$$
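A quick numerical sanity check of this formula: for an exponential distribution the mean residual life is constant and equal to the mean, by memorylessness. The rate, truncation point, and integration grid below are arbitrary choices for the check:

```python
import math

rate, t = 2.0, 1.5
F = lambda u: 1.0 - math.exp(-rate * u)   # Exponential(rate) CDF

# trapezoidal approximation of the integral of the survival function on [t, t+40]
a, b, n = t, t + 40.0, 200_000
h = (b - a) / n
integral = 0.5 * h * ((1 - F(a)) + (1 - F(b)))
integral += h * sum(1 - F(a + i * h) for i in range(1, n))

m_t = integral / (1.0 - F(t))             # should be close to 1/rate = 0.5
```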
25,589 | Mean residual life | Here is another way you can think about the problem, using the identity $\mathbb{E}[Y] = \int_0^\infty \mathbb{P}(Y > s)\,ds$ for a nonnegative random variable $Y$.
\begin{align*}
& \mathbb{E}[X - t \, | \, X> t] = \int_0^{\infty} \mathbb{P}(X - t > s \, | \, X > t)ds = \int_0^{\infty} \dfrac{\mathbb{P}(X - t > s, \, X > t)}{\mathbb{P}(X > t)}ds \\
= & \dfrac{1}{\mathbb{P}(X > t)}\int_0^{\infty} \mathbb{P}(X > t + s, \, X > t)ds = \dfrac{1}{1 - F(t)} \int_0^{\infty} (1 - F(t+s))ds \\= & \dfrac{1}{1 - F(t)} \int_t^{\infty} (1 - F(s))ds
\end{align*}
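The same conditional expectation can be checked by simulation. For an exponential variable, E[X - t | X > t] equals 1/rate regardless of t (memorylessness), which the sample mean of the residuals should reproduce:

```python
import random
import statistics

random.seed(0)
rate, t = 2.0, 1.5
samples = (random.expovariate(rate) for _ in range(500_000))

# residual lives of the draws that survive past t
residuals = [x - t for x in samples if x > t]
est = statistics.fmean(residuals)   # should be close to 1/rate = 0.5
```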
25,590 | Where can I find datasets usefull for testing my own Machine Learning implementations? [closed] | From the UC Irvine Machine Learning Repository:
We currently maintain 223 data sets as a service to the machine learning community. You may view all data sets through our searchable interface. Our old web site is still available, for those who prefer the old format. ... If you wish to donate a data set, please consult our donation policy. ... We have also set up a mirror site for the Repository.
Also, the following MIAS dataset has been widely used and studied:
When benchmarking an algorithm it is recommendable to use a standard test database (data set) for researchers to be able to directly compare the results. Most of the mammographic databases are not publicly available. The most easily accessed databases and therefore the most commonly used databases are the Mammographic Image Analysis Society (MIAS) database and the Digital Database for Screening Mammography (DDSM). Besides, there are currently few projects developing new mammographic image databases as well as several old projects.
25,591 | Where can I find datasets usefull for testing my own Machine Learning implementations? [closed] | The UCI repository mentioned by Bashar is probably the largest, nevertheless I wanted to add a couple of smaller collections I came across:
Datasets from the Mulan Java library
Datasets from the Auton lab of Carnegie Mellon University's School of Computer Science
Datasets used in the Book Elements of Statistical Learning
Several datasets from KDD Cup competitions
Datasets at the Department of Statistics, University of Munich
25,592 | Probability of a relation on the uniform distribution of points over 2D space | There are at least two interpretations: one concerns the actual points generated by this process and the other concerns the process itself.
If a realization of the Poisson process is given and pairs of points are to be chosen from that realization, then there is nothing to be done except systematically compare all distances to all other distances (a double loop over the points).
Otherwise, if the procedure is intended to consist of (i) creating a realization of the process and then (ii) selecting a pair of points at random, then the assumptions imply the two points are selected uniformly and independently from the circle. The calculation for this situation can be performed once and for all.
Notice that the squared distances $r_1 = d_1^2$ and $r_2 = d_2^2$ are uniformly distributed, whence the desired probability is
$$p(a,b) = \Pr\left(d_1^2 \lt \frac{d_2^2}{a(1 + b d_2^2)}\right) = \int_0^1 d r_2 \int_0^{\max(0, \min(1, r_2 / (a(1 + b r_2))))} d r_1.$$
The $\max$ and $\min$ can be handled by breaking into cases. Some special values of $a$ and $b$ have to be handled. Because the integration is a square window over a region generically bounded by lines and lobes of a hyperbola (with vertical axis at $1/(ab)$ and horizontal axis at $-1/b$), the result is straightforward but messy; it should involve rational expressions in $a$ and $b$ and some inverse hyperbolic functions (that is, natural logarithms). I had Mathematica write it out:
$$\begin{array}{ll}
\frac{b+1}{b} & \left(-1\leq a<0\land \frac{1}{a}-b\leq 1\land b<-1\right)\\ &\lor \left(a<-1\land \frac{1}{a}-b<1\land b<-1\right) \\
-\frac{1}{b (a b-1)} & \frac{1}{a}-b=1\land a<-1 \\
\frac{a^2 b+2 a b+a-2}{2 (a b-1)} & b=0\land a>0\land \frac{1}{a}-b>1 \\
\frac{b-\log (b+1)}{a b^2} & a>0\land \frac{1}{a}-b\leq 1\land b>-1 \\
\frac{a b^2+a b-a b \log (b+1)-b+\log (b+1)}{a b^2 (a b-1)} & a>0\land \frac{1}{a}-b\leq 1\land b\leq -1 \\
\frac{\log (1-a b)}{a b^2} & a>0\land \frac{1}{a}-b>1\land b\leq -1 \\
\frac{a b^2+a b+\log (1-a b)}{a b^2} & \left(-1<b<0\land a>0\land \frac{1}{a}-b>1\right) \\
& \lor \left(b>0\land a>0\land \frac{1}{a}-b>1\right) \\
\frac{b-\log ((-b-1) (a b-1))}{a b^2} & a<0\land \frac{1}{a}-b>1
\end{array}$$
Numeric integration and simulation over the ranges $-2 \le a \le 2$ and $-5 \le b \le 5$ confirm these results.
Edit
The modified question asks to replace $d_i^2$ by $d_i^\alpha$ and assumes $a$ and $b$ are both positive. Upon making the substitution $r_i = d_i^\alpha$, the region of integration remains the same and the integrand becomes $(2/\alpha)^2(r_1 r_2)^{2/\alpha-1}$ instead of $1$. Writing $\theta = \alpha/2$, we obtain
$$\frac{1}{2} a^{-1/\theta } \, _2F_1\left(\frac{1}{\theta },\frac{2}{\theta };\frac{\theta +2}{\theta };-b\right)$$
when $(a>0\land a<1\land a b+a\geq 1)$ or $a\geq 1$ and otherwise the result is
$$-a^{\frac{1}{\theta }} \left(\frac{1}{1-a b}\right)^{\frac{1}{\theta }}+\frac{1}{2} a^{\frac{1}{\theta }} (1-a b)^{-2/\theta } \, _2F_1\left(\frac{1}{\theta },\frac{2}{\theta };\frac{\theta +2}{\theta };1+\frac{1}{a b-1}\right)+1.$$
Here, $_2F_1$ is the hypergeometric function. The original case of $\alpha=2$ corresponds to $\theta=1$ and then these formulae reduce to the fourth and seventh of the eight previous cases. I have checked this result with a simulation, letting $\theta$ range from $1$ through $3$ and covering substantial ranges of $a$ and $b$.
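One of the positive-parameter cases above is easy to spot-check by simulation, using the fact that the squared distances are uniform on $[0,1]$ for points uniform in the unit disc. Here $a=b=1/2$, which satisfies $a>0$, $b>0$, $1/a-b>1$:

```python
import math
import random

random.seed(42)
a, b, n = 0.5, 0.5, 400_000

hits = 0
for _ in range(n):
    # squared radius of a uniform point in the unit disc is Uniform(0,1)
    r1, r2 = random.random(), random.random()
    if r1 < r2 / (a * (1.0 + b * r2)):
        hits += 1
p_mc = hits / n

# closed form for the case a>0, b>0, 1/a - b > 1:
p_exact = (a * b * b + a * b + math.log(1.0 - a * b)) / (a * b * b)
```

With these parameters the closed form gives about 0.6985, and the Monte Carlo estimate should agree to within sampling error.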
25,593 | Probability of a relation on the uniform distribution of points over 2D space | This problem can be solved by decomposing into parts and using the properties of a Poisson process.
It helps to recall how to generate a Poisson point process of intensity $\rho$ on a bounded subset $\newcommand{\A}{\mathcal A}$ of $\mathbb R^2$. We first generate a Poisson random variable $N$ with rate $\rho |\mathcal A|$ where $|\cdot|$ denotes Lebesgue measure, and then we sprinkle these $N$ points uniformly at random inside of $\A$.
This immediately tells us that as long as $N \geq 2$, if we choose two points (without replacement) at random, then these two points will be independent and uniformly distributed on $\A$. When $N < 2$, we have to do something and one natural choice is to define the desired probability as zero. Note that this happens with probability
$$\renewcommand{\Pr}{\mathbb P}\Pr(N < 2) = (1+\rho|\A|) e^{-\rho|\A|} \>.
$$
This is the only part of the problem that depends on the Poisson process intensity.
Probability conditional on $\{N \geq 2\}$
We are interested in the probability
$$
p(A,B,r) := \Pr\Big( d_1^2 \leq \frac{d_2^2}{A(1+B d_2^2)}\Big) \>,
$$
where $A > 0$, $B > 0$ and $\A = \{x : \|x\|_2 \leq r\}$. Here $d_1$ and $d_2$ are the radii of two of our uniformly distributed points that fall in $\A$.
Note that for a point randomly distributed in the disc of radius $r$, the distribution of the distance from the origin is $\Pr(D \leq d) = (d/r)^2$, from which we can see that $D^2$ has the same distribution as $r^2 U$ where $U \sim \mathcal U(0,1)$. From this, we can restate the probability of interest as
$$
p(A,B,r) = \Pr\Big( U_1 \leq \frac{U_2}{A(1+B r^2 U_2)}\Big) = \iint 1_{(0 < x < 1)} 1_{(0 < y < 1)} 1_{(0 < y < x/(A+ABr^2 x))} \,\mathrm dy \, \mathrm dx \>.
$$
This integral splits into two cases. To calculate it, we need the general integral
$$
\int_0^t \frac{x}{a+bx} \,\mathrm d x = \frac{1}{b} (t - \frac{a}{b} \log(1+bt/a)) \>.
$$
Case 1: $A(1+B r^2) \geq 1$.
Here we see that $u \leq A(1+B r^2 u)$ for $u \in [0,1]$, so
$$
p(A,B,r) = \frac{1}{ABr^2}\Big(1 - \frac{\log(1+B r^2)}{B r^2}\Big) \>.
$$
Case 2: $A(1+B r^2) < 1$.
Here the integral for $p(A,B,r)$ splits into two pieces since $u \geq A(1+B r^2 u)$ on $[A/(1-ABr^2),1]$. Hence we integrate up to $t = A/(1-A B r^2)$ using the general integral and then tack on an additional area of $1-A/(1-ABr^2)$ for the second piece. So, we get
$$
\begin{align}
p(A,B,r) &= \frac{1}{B r^2} \Big(\frac{1}{1-A B r^2} + \frac{\log(1-AB r^2)}{A B r^2}\Big) + 1 - \frac{A}{1 - A B r^2} \\
&= 1 + \frac{1}{B r^2} \Big(1 + \frac{\log(1-AB r^2)}{A B r^2}\Big) \>.
\end{align}
$$
Oftentimes a picture helps; here is one that shows an example of the integration region for each case. Note that $U_1$ is on the $y$-axis and $U_2$ on the $x$-axis.
The final probability of interest is then, of course, $(1 - (1+\rho\pi r^2) e^{-\rho\pi r^2} ) p(A,B,r)$.
An easy generalization
We can easily generalize the result to use a different shaped ball. In fact, for any arbitrary norm on $\mathbb R^2$, the conditional probability $p(A,B,r)$ is invariant as long as we use the ball induced by the norm instead of the circle!
This is because no matter what norm we choose, the squared radius is uniformly distributed. To see why, let $\delta(\cdot)$ be a norm on $\mathbb R^2$ and $B_\delta(r) = \{x: \delta(x) \leq r\}$ the ball of radius $r$ under the norm $\delta$. Note that $rx \in B_\delta(r)$ if and only if $x \in B_\delta(1)$. The scaling up or down of the unit ball is a linear transformation and by a standard fact about Lebesgue measure, the measure of a linear transformation $T$ of $B_\delta(1)$ is
$$
|B_\delta(r)| = |T B_\delta(1)| = |\det(T)| |B_\delta(1)| = r^2 |B_\delta(1)| \>,
$$
since $T(x) = r x = (r x_1, r x_2)$ in this case.
This shows that if $D = \delta(X)$ for $X$ uniformly distributed in $B_\delta(r)$, then
$$
\Pr(D \leq d) = \frac{|B_\delta(d)|}{|B_\delta(r)|} = (d/r)^2 \>.
$$
The eagle-eyed reader will note that we've only used the homogeneity of the norm here, and so a similar result will hold in general for uniform distributions on classes of sets closed under a homogeneous transformation.
Here is a picture with two points selected. The norms shown are the Euclidean norm, $\ell_1$ norm, $\sup$ norm, and the $\ell^p$ norm for $p = 5$. Each unit ball is outlined in black, and the largest ball within which the two randomly selected points lie is drawn in the corresponding color.
The conditional probability $p(A,B,r)$ is the same for each picture when the distance is measured using the corresponding norm.
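The generation recipe in the second paragraph translates directly into code: draw the Poisson count for the region, then sprinkle that many uniform points. This sketch uses Knuth's method for the Poisson draw and rejection sampling for the disc; the intensity and radius are arbitrary illustrative values:

```python
import math
import random

random.seed(7)
rho, r = 3.0, 2.0
lam = rho * math.pi * r * r          # expected number of points in the disc

def poisson(lam):
    # Knuth's method: count uniforms until their product drops below exp(-lam)
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

n = poisson(lam)
points = []
while len(points) < n:
    x, y = random.uniform(-r, r), random.uniform(-r, r)
    if x * x + y * y <= r * r:       # keep only points that land inside the disc
        points.append((x, y))
```

Conditional on the count, the retained points are independent and uniform on the disc — exactly the property the answer exploits.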
25,594 | What is hierarchical prior in Bayesian statistics? | A regular Bayesian model has the form $p(\theta |y) \propto p(\theta)p(y|\theta)$. Essentially the posterior is proportional to the product of the likelihood and the prior. Hierarchical models put priors on the prior (called a hyperprior) $p(\theta |y) \propto p(y|\theta)p(\theta |\lambda)p(\lambda)$. We can do this as often as we want.
See Gelman's "Bayesian Data Analysis" for a good explanation.
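The proportionality $p(\theta|y) \propto p(y|\theta)p(\theta)$ can be made concrete with a grid approximation. Below, a normal likelihood with a normal prior, so the grid result can be checked against the conjugate closed form; all numbers are made-up illustrations:

```python
import math

y, sigma = 1.2, 1.0                  # one observation, known likelihood sd
mu0, tau0 = 0.0, 2.0                 # normal prior on theta

def dnorm(x, m, s):
    return math.exp(-0.5 * ((x - m) / s) ** 2) / (s * math.sqrt(2.0 * math.pi))

step = 0.001
grid = [-5.0 + i * step for i in range(10001)]
unnorm = [dnorm(y, th, sigma) * dnorm(th, mu0, tau0) for th in grid]  # likelihood * prior
Z = sum(unnorm) * step                                                # normalizing constant
post_mean = sum(th * w for th, w in zip(grid, unnorm)) * step / Z

# conjugate-normal check: (y/sigma^2 + mu0/tau0^2) / (1/sigma^2 + 1/tau0^2)
exact = (y / sigma**2 + mu0 / tau0**2) / (1.0 / sigma**2 + 1.0 / tau0**2)
```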
25,595 | What is hierarchical prior in Bayesian statistics? | When you have a hierarchical Bayesian model (also called multilevel model), you get priors for the priors and they are called hierarchical priors.
Consider for example:
$z = \beta_0+\beta_1 y+\epsilon, \\
\epsilon \sim N(0,\sigma),\\
\beta_0 \sim N(\alpha_0,\sigma_0), \quad \beta_1 \sim N(\alpha_1,\sigma_1),\\
\alpha_0 \sim \text{Inverse-Gamma}(\alpha_{01},\theta_0)
$
In this case, you can say that the inverse-gamma distribution on $\alpha_0$ is a hyperprior.
EDIT:
This was very useful to me when I learned about Hierarchical Bayesian Modeling. For an in-depth explanation and detail, you may refer to Gelman's Data Analysis Using Regression and Multilevel/Hierarchical Models.
25,596 | PACF manual calculation | As you said, "The PACF values are the coefficients of an autoregression of the series of interest on lagged values of the series", and I add that the PACF(K) is the coefficient of the last (Kth) lag. Thus, to compute the PACF at lag 3, for example, compute
\begin{equation}
Y_{t} = a_{0} + a_{1}Y_{t-1} + a_{2}Y_{t-2} + a_{3}Y_{t-3}
\end{equation}
and $a_{3}$ is the PACF(3).
Another example. To compute the PACF(5), estimate
\begin{equation}
Y_{t} = a_{0} + a_{1}Y_{t-1} + a_{2}Y_{t-2} + a_{3}Y_{t-3} + a_{4}Y_{t-4} + a_{5}Y_{t-5}
\end{equation}
and $a_{5}$ is the PACF(5).
In general, the PACF(K) is the Kth-order coefficient of a model terminating with lag K. By the way, SAS and other software vendors use the Yule-Walker approximation to compute the PACF, which will provide slightly different estimates of the PACF. They do this for computational efficiency and, in my opinion, to duplicate the results in standard textbooks.
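This recipe is easy to check numerically: regress the series on its first K lags by OLS and read off the last coefficient. A rough Python/NumPy sketch (the AR(1) series is simulated for illustration; as noted above, Yule-Walker-based software will give slightly different values):

```python
import numpy as np

def pacf_by_regression(y, k):
    """PACF(k): last coefficient of an OLS regression of y_t on y_{t-1..t-k}."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Design matrix: intercept plus k lagged columns
    X = np.column_stack([np.ones(n - k)] +
                        [y[k - j:n - j] for j in range(1, k + 1)])
    coeffs, *_ = np.linalg.lstsq(X, y[k:], rcond=None)
    return coeffs[-1]  # a_k, the PACF at lag k

# Simulated AR(1): its PACF should be near phi at lag 1 and near 0 beyond
rng = np.random.default_rng(1)
phi, y = 0.6, [0.0]
for _ in range(999):
    y.append(phi * y[-1] + rng.normal())
print(pacf_by_regression(y, 1), pacf_by_regression(y, 3))
```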
25,597 | How to compare 2 non-stationary time series to determine a correlation? | This is a simple situation; let's keep it so. The key is to focus on what matters:
Obtaining a useful description of the data.
Assessing individual deviations from that description.
Assessing the possible role and influence of chance in the interpretation.
Maintaining intellectual integrity and transparency.
There are still many choices and many forms of analysis will be valid and effective. Let's illustrate one approach here that can be recommended for its adherence to these key principles.
To maintain integrity, let's split the data into halves: the observations from 1972 through 1990 and those from 1991 through 2009 (19 years in each). We will fit models to the first half and then see how well the fits work in projecting the second half. This has the added advantage of detecting significant changes that may have occurred during the second half.
To obtain a useful description, we need to (a) find a way to measure the changes and (b) fit the simplest possible model appropriate for those changes, evaluate it, and iteratively fit more complex ones to accommodate deviations from the simple models.
(a) You have many choices: you can look at the raw data; you can look at their annual differences; you can do the same with the logarithms (to assess relative changes); you can assess years of life lost or relative life expectancy (RLE); or many other things. After some thought, I decided to consider RLE, defined as the ratio of life expectancy in Cohort B relative to that of the (reference) Cohort A. Fortunately, as the graphs show, the life expectancy in Cohort A is increasing regularly in a stable fashion over time, so that most of the random-looking variation in the RLE will be due to changes in Cohort B.
(b) The simplest possible model to start with is a linear trend. Let's see how well it works.
The dark blue points in this plot are the data retained for fitting; the light gold points are the subsequent data, not used for the fit. The black line is the fit, with a slope of .009/year. The dashed lines are prediction intervals for individual future values.
Overall, the fit looks good: examination of residuals (see below) shows no important changes in their sizes over time (during the data period 1972-1990). (There is some indication that they tended to be larger early on, when life expectancies were low. We could handle this complication by sacrificing some simplicity, but the benefits for estimating the trend are unlikely to be great.) There is just the tiniest hint of serial correlation (exhibited by some runs of positive and runs of negative residuals), but clearly this is unimportant. There are no outliers, which would be indicated by points beyond the prediction bands.
The one surprise is that in 2001 the values suddenly fell to the lower prediction band and stayed there: something rather sudden and large happened and persisted.
Here are the residuals, which are the deviations from the description mentioned previously.
Because we want to compare the residuals to 0, vertical lines are drawn to the zero level as a visual aid. Again, the blue points show data used for the fit. The light gold ones are the residuals for data falling near the lower prediction limit, post-2000.
From this figure we can estimate that the effect of the 2000-2001 change was about -0.07. This reflects a sudden drop of 0.07 (7%) of a full lifetime within Cohort B. After that drop, the horizontal pattern of residuals shows that the previous trend continued, but at the new lower level. This part of the analysis should be considered exploratory: it was not specifically planned, but came about due to a surprising comparison between the held-out data (1991-2009) and the fit to the rest of the data.
One other thing--even using just the 19 earliest years of data, the standard error of the slope is small: it's only .0009, just one-tenth of the estimated value of .009. The corresponding t-statistic of 10, with 17 degrees of freedom, is extremely significant (the p-value is less than $10^{-7}$); that is, we can be confident the trend is not due to chance. This is one part of our assessment of the role of chance in the analysis. The other parts are the examinations of the residuals.
There appears to be no reason to fit a more complicated model to these data, at least not for the purpose of estimating whether there's a genuine trend in RLE over time: there is one. We could go further and split the data into pre-2001 values and post-2000 values in order to refine our estimates of the trends, but it wouldn't be completely honest to conduct hypothesis tests. The p-values would be artificially low, because the splitting and testing were not planned in advance. But as an exploratory exercise, such estimation is fine. Learn all you can from your data! Just be careful not to deceive yourself with overfitting (which is almost sure to happen if you use more than a half dozen parameters or so or use automated fitting techniques), or data snooping: stay alert to the difference between formal confirmation and informal (but valuable) data exploration.
Let's summarize:
By selecting an appropriate measure of life expectancy (the RLE), holding out half the data, fitting a simple model, and testing that model against the remaining data, we have established with high confidence that: there was a consistent trend; it has been close to linear over a long period of time; and there was a sudden persistent drop in RLE in 2001.
Our model is strikingly parsimonious: it requires just two numbers (a slope and intercept) to describe the early data accurately. It needs a third (the date of the break, 2001) to describe an obvious but unexpected departure from this description. There are no outliers relative to this three-parameter description. The model is not going to be substantially improved by characterizing serial correlation (the focus of time-series techniques generally), attempting to describe the small individual deviations (residuals) exhibited, or introducing more complicated fits (such as adding in a quadratic time component or modeling changes in the sizes of the residuals over time).
The trend has been 0.009 RLE per year. This means that with each passing year, the life expectancy within Cohort B has had 0.009 (almost 1%) of a full expected normal lifetime added to it. Over the course of the study (37 years), that would amount to 37 * 0.009 = 0.33 = one-third of a full lifetime improvement. The setback in 2001 reduced that gain to about 0.28 of a full lifetime from 1972 to 2009 (even though during that period overall life expectancy increased 10%).
Although this model could be improved, it would likely need more parameters and the improvement is unlikely to be great (as the near-random behavior of the residuals attests). On the whole, then, we should be content to arrive at such a compact, useful, simple description of the data for so little analytical work.
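The split-sample workflow above (fit a straight line to the early years, then check whether held-out years stay inside the prediction band) can be sketched roughly as follows. The RLE series below is synthetic, built only to mimic a trend plus a 2001-style drop, and the band is a crude residual-SD band rather than a proper regression prediction interval:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic RLE: linear trend plus small noise, with a level drop in 2001
years = np.arange(1972, 2010)
t = years - 1972
rle = 0.5 + 0.009 * t + rng.normal(scale=0.005, size=years.size)
rle[years >= 2001] -= 0.07

# Fit the trend on the first half only (1972-1990)
fit = years <= 1990
slope, intercept = np.polyfit(t[fit], rle[fit], 1)

# Crude band: three residual standard deviations around the fitted line
resid = rle[fit] - (intercept + slope * t[fit])
s = resid.std(ddof=2)
pred = intercept + slope * t
outside = np.abs(rle - pred) > 3 * s  # flags the persistent post-2000 drop

print(round(slope, 4), years[outside])
```

With real data, the same diagnostic (held-out points falling persistently on one side of the band) is what reveals a structural break like the one described above.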
25,598 | How to compare 2 non-stationary time series to determine a correlation? | I think that whuber's answer is straightforward and a simple one for a non-time-series person like me to understand. I base mine on his. My answer is in R, not Stata, as I don't know Stata that well.
I wonder if the question is actually asking us to look at whether the absolute year on year increase is the same in the two cohorts (rather than relative). I think this is important and illustrate it as follows. Consider the following toy example:
a <- 21:40  # cohort A: median survival rising one year per year
b <- 41:60  # cohort B: same absolute increase, higher level
x <- 1:20
plot(y = a, x = x, ylim = c(0, 60))
points(y = b, x = x, pch = 2)
Here we have 2 cohorts, each of which has a steady 1-year-per-year increase in median survival. So each year both cohorts in this example increase by the same absolute amount, but the RLE gives the following:
rle <- a / b
plot(rle)
This obviously has an upward trend, and the p-value for a test of the hypothesis that the gradient of the line is 0 is below 2.2e-16. The fitted straight line (let's ignore that this line looks curved) has a gradient of 0.008. So even though both cohorts have the same absolute increase in a year, the RLE has an upward slope.
So if you use RLE when you want to look for absolute increases, then you'll inappropriately reject the null hypothesis.
Using the supplied data, calculating the absolute difference between the cohorts we get:
Which implies that the absolute difference between median survival is gradually decreasing (i.e. the cohort with the poor survival is gradually getting closer to the cohort with the better survival).
25,599 | How to compare 2 non-stationary time series to determine a correlation? | These two time series seem to have a deterministic trend. This is one relation that you obviously want to remove before further analysis. Personally, I would proceed as follows:
1) I would run a regression of each time series against a constant and a time trend, and compute the residuals for each time series.
2) Taking the two residual series computed in the step above, I would run a simple linear regression (without a constant term), look at the t-statistic and p-value, and decide on whether or not there is further dependence between the two series.
This analysis assumes the same set of assumptions you make in a linear regression.
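A rough sketch of this two-step procedure in Python/NumPy (the two series are synthetic stand-ins for the cohort data, and the t-statistic here ignores the fact that the residuals come from a first-stage fit):

```python
import numpy as np

rng = np.random.default_rng(3)
t = np.arange(38, dtype=float)

# Two trending series that share a common shock (the dependence to detect)
shock = rng.normal(size=t.size)
a = 60 + 0.5 * t + shock + rng.normal(scale=0.5, size=t.size)
b = 30 + 0.8 * t + shock + rng.normal(scale=0.5, size=t.size)

def detrend(y, t):
    """Step 1: regress y on a constant and time; keep the residuals."""
    slope, intercept = np.polyfit(t, y, 1)
    return y - (intercept + slope * t)

ra, rb = detrend(a, t), detrend(b, t)

# Step 2: regress one residual series on the other, without a constant term
beta = (ra @ rb) / (rb @ rb)
se = np.sqrt(((ra - beta * rb) ** 2).sum() / (len(t) - 1) / (rb @ rb))
print(round(beta, 2), round(beta / se, 1))  # slope and its t-statistic
```

If the residuals are themselves autocorrelated, the t-statistic will be optimistic; that caveat is exactly what the pre-whitening approach in the next answer addresses.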
25,600 | How to compare 2 non-stationary time series to determine a correlation? | In some cases one knows a theoretical model which can be used to test your hypothesis. In my world this "knowledge" is often absent and one must resort to statistical techniques that can be classified as exploratory data analysis, which summarizes what follows. When analyzing time series data that are non-stationary, i.e. have autocorrelative properties, simple cross-correlation tests are often misleading insofar as false positives can easily be found. One of the earliest analyses of this is found in Yule, G.U., 1926, "Why do we sometimes get nonsense correlations between time series? A study in sampling and the nature of time series", Journal of the Royal Statistical Society 89, 1–64. Alternatively, one or more of the series may themselves have been affected by exceptional activity (see whuber's "sudden setback in Cohort B at 2001"), which can effectively hide significant relationships. Now, detecting a relationship between time series extends to examining not only contemporaneous relationships but possible lagged relationships. Continuing, if either series has been affected by anomalies (one-time events), then we must robustify our analysis by adjusting for these one-time distortions. The literature of time series points out how to identify the relationship via pre-whitening in order to more clearly identify structure. Pre-whitening adjusts for intra-correlative structure prior to identifying inter-correlative structure. Notice the key word is identifying structure. This approach easily leads to the following "useful model":
Y(T) = -194.45
+[X1(T)][(+ 1.2396+ 1.6523B** 1)] COHORTA
+[X2(T)][(- 3.3924)] :PULSE 3
+[X3(T)][(- 2.4760)] :LEVEL SHIFT 30 reflecting persistent unusual activity
+[X4(T)][(+ 1.1453)] :PULSE 29
+[X5(T)][(- 2.7249)] :PULSE 11
+[X6(T)][(+ 1.5248)] :PULSE 27
+[X7(T)][(+ 2.1361)] :PULSE 4
+[X8(T)][(+ 1.6395)] :PULSE 13
+[X9(T)][(- 1.6936)] :PULSE 12
+[X10(T)][(- 1.6996)] :PULSE 19
+[X11(T)][(- 1.2749)] :PULSE 10
+[X12(T)][(- 1.2790)] :PULSE 17
+ [A(T)]
which suggests a contemporaneous relationship of 1.2396 and a lagged effect of 1.6523. Note that there were a number of years where unusual activity was identified, viz. 1975, 2001, 1983, 1999, 1976, 1985, 1984, 1991 and 1989. The adjustments for these years allow us to more clearly assess the relationship between the two series.
In terms of making a forecast:
MODEL EXPRESSED AS AN XARMAX
Y[t] = a[1]Y[t-1] + ... + a[p]Y[t-p]
+ w[0]X[t-0] + ... + w[r]X[t-r]
+ b[1]a[t-1] + ... + b[q]a[t-q]
+ constant
THE RIGHT-HAND SIDE CONSTANT IS: -194.45
COHORTA 0 1.239589 X( 39 ) * 78.228616 = 96.971340
COHORTA 1 1.652332 X( 38 ) * 77.983000 = 128.853835
I~L00030 0 -2.475963 X( 39 ) * 1.000000 = -2.475963
NET PREDICTION FOR Y( 39 )= 28.894826
Four coefficients are all that is required to make a forecast, along with a prediction for COHORTA at time period 39 (78.228616) obtained from the ARIMA model for COHORTA.
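A bare-bones illustration of the pre-whitening idea (not the full ARIMA-with-interventions model above): fit an AR(1) to the input series by least squares, filter both series with the fitted coefficient, then cross-correlate the filtered versions. The data are simulated so that the output depends on the input at lag 1:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated input x: AR(1); output y depends on x at lag 1
n, phi = 500, 0.8
x = np.zeros(n)
for t in range(1, n):
    x[t] = phi * x[t - 1] + rng.normal()
y = np.empty(n)
y[1:] = 1.5 * x[:-1] + rng.normal(size=n - 1)
y[0] = 0.0
x, y = x[1:], y[1:]  # drop the unmodeled first point

# Fit AR(1) to x by least squares, then filter BOTH series with it
phi_hat = np.polyfit(x[:-1], x[1:], 1)[0]
xw = x[1:] - phi_hat * x[:-1]
yw = y[1:] - phi_hat * y[:-1]

def ccf(a, b, lag):
    """Cross-correlation corr(a_t, b_{t-lag}) for lag >= 0."""
    a, b = a - a.mean(), b - b.mean()
    return np.corrcoef(a[lag:], b[:len(b) - lag])[0, 1]

# After pre-whitening, the lag-1 cross-correlation stands out
print([round(ccf(yw, xw, k), 2) for k in range(4)])
```

On the raw (unwhitened) series the cross-correlations would be large at many lags, which is exactly the false-positive problem Yule described.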