24,501
R equivalent to cluster option when using negative binomial regression
This document shows how to get cluster SEs for a glm regression: http://dynaman.net/R/clrob.pdf
24,502
R equivalent to cluster option when using negative binomial regression
This is not a fully satisfactory answer... I haven't tried it myself but it looks like the glmmADMB package might do what you want. I will shamelessly pinch from @fmark's comment on the question and agree with him that Ben Bolker's notes are useful, as is this earlier question, which is not quite an exact duplicate but covers very similar issues.
24,503
Is there such a thing as a fair die?
I think the concept of 'fair' is hard to define. Since a given roll of the die will produce a deterministic result (in other words, physics determines what the result is), we can't really say that there is a certain 'probability' of rolling a one. This relates to the mind projection fallacy, which essentially says that probability is a property of one's state of information about a phenomenon, not a property of the phenomenon itself. Relating to the roll of a die, the result is based not just on the die, but also on the method by which it is rolled. If we 'know' enough about a given roll (the die's material composition, its initial orientation, the forces applied to it, the environment it will land in, etc.) we can (theoretically) model all of the motion that occurs in that roll with arbitrary accuracy, and instead of finding a 1/6 'probability' of landing on a given side, we will be near certain that it will land on some particular side. All of this is very unrealistic, of course, but my point is that the method of rolling is as important as the die's physical makeup. I think a good definition of a 'fair' die would be one for which, under reasonable constraints (on computing power, time, accuracy of measurements), it is not possible to predict the result of a roll with any useful level of confidence. The specifics of these constraints would depend on the reasons you are checking whether the die is fair. Aside: Suppose I tell you I have an 'unfair coin' and I will give you a million dollars if you can correctly guess which side it will land on. Do you choose heads or tails?
24,504
Is there such a thing as a fair die?
A little Googling reveals a Wikipedia (gasp!) article on dice. It includes remarks on the precision of dice that mentions the problem of scooping out the dots (they are refilled with material of the same density). Are these going to be exactly fair? How will you define that? How close to 1/6 does each result have to be to qualify?
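One concrete way to operationalize "how close to 1/6" is a chi-square goodness-of-fit test on observed roll counts. The counts below are made up for illustration, not taken from the answer:

```python
from scipy.stats import chisquare

counts = [98, 105, 92, 110, 101, 94]  # 600 rolls of a six-sided die
stat, p = chisquare(counts)           # null: every face has probability 1/6
print(stat, p)                        # a large p-value: no evidence of unfairness
```

Of course, as the answer notes, "not rejected" mostly reflects how many rolls you bothered to collect, not that the die is exactly fair.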
24,505
How to conceptualize error in a regression model?
If there are aspects of individuals that have an effect on the resulting y values, then either there is some way of getting at those aspects (in which case they should be part of the predictor x), or there's no way of ever getting at that information. If there's no way of ever getting at this information, and no way of repeatedly measuring y values for individuals, then it really doesn't matter. If you can measure y repeatedly, and if your data set actually contains repeated measurements for some individuals, then you've got a potential problem on your hands, since the statistical theory assumes independence of the measurement errors/residuals. For example, suppose that you're trying to fit a model of the form $y=\beta_0+\beta_1 x$, and that for each individual, $y_{\text{ind}}=100+10x+z$, where $z$ depends on the individual and is normally distributed with mean 0 and standard deviation 10. For each repeated measurement of an individual, $y_{\text{meas}}=100+10x+z+e$, where $e$ is normally distributed with mean 0 and standard deviation 0.1. You could try to model this as $y=\beta_0+\beta_1 x+\epsilon$, where $\epsilon$ is normally distributed with mean 0 and standard deviation $\sigma=\sqrt{10^2+0.1^2}=\sqrt{100.01}$. As long as you only have one measurement for each individual, that would be fine. However, if you have multiple measurements for the same individual, then your residuals will no longer be independent! For example, if you have one individual with $z=15$ (1.5 standard deviations out, so not that unreasonable), and one hundred repeated measurements of that individual, then using $\beta_0=100$ and $\beta_1=10$ (the exact values!) you'd end up with 100 residuals of about +1.5 standard deviations, which would look extremely unlikely. This would affect the $\chi^2$ statistic.
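The scenario described above is easy to simulate. The sketch below follows the answer's numbers and shows the resulting block of dependent residuals directly:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=100)
z = 15.0                             # individual effect, 1.5 SD out
e = rng.normal(scale=0.1, size=100)  # small measurement error
y = 100 + 10 * x + z + e             # 100 repeats of ONE individual

resid = y - (100 + 10 * x)           # residuals against the true line
print(resid.mean())                  # ~15, not ~0: the residuals are dependent
```

All 100 residuals sit on the same side of the line, which is what breaks the independence assumption.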
24,506
How to conceptualize error in a regression model?
I think "error" is best described as "the part of the observations that is unpredictable given our current information". Trying to think in terms of population vs sample leads to conceptual problems (well, it does for me anyway), as does thinking of the errors as "purely random" draws from some distribution. Thinking in terms of prediction and "predictability" makes much more sense to me. I also think the maximum entropy principle provides a neat way to understand why a normal distribution is used. When modelling, we are assigning a distribution to the errors to describe what is known about them. Any joint distribution $p(e_{1},\dots,e_{n})$ could represent a conceivable state of knowledge. However, if we specify some structure such as $E(\frac{1}{n}\sum_{i=1}^{n}e_{i}^2)=\sigma^2$, then the most uniform distribution subject to this constraint is the normal distribution with zero mean and constant variance $\sigma^2$. This shows that "independence" and "constant variance" are actually safer than assuming otherwise under this constraint - namely, that the average second moment exists and is finite, and that we expect the general size of the errors to be $\sigma$. So one way to think of this is that we do not necessarily believe our assumptions are "correct" but rather "safe", in the sense that we are not injecting a lot of information into the problem (we are imposing just one structural constraint in $n$ dimensions). So we are starting from a safe area, and we can build up from here depending on what specific information we have about the particular case and data set at hand.
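The maximum-entropy argument behind this can be written out in a few lines (a standard derivation, not from the original answer):

```latex
\max_{p}\; -\int p(e)\,\log p(e)\,de
\quad \text{s.t.} \quad \int p(e)\,de = 1, \qquad \int e^{2}\,p(e)\,de = \sigma^{2}.
```

Setting the functional derivative of the Lagrangian to zero gives $\log p(e) = -1 - \lambda_0 - \lambda_1 e^{2}$, i.e. $p(e) \propto e^{-\lambda_1 e^{2}}$. Matching the two constraints fixes $\lambda_1 = 1/(2\sigma^{2})$ and the normalizing constant, yielding $p(e) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-e^{2}/(2\sigma^{2})}$, the $N(0,\sigma^{2})$ density.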
24,507
How to conceptualize error in a regression model?
Here is a very useful link explaining simple linear regression: http://www.dangoldstein.com/dsn/archives/2006/03/every_wonder_ho.html Maybe it can help to grasp the "error" concept. F.D.
24,508
How to conceptualize error in a regression model?
I disagree with the professor's formulation of this. As you say, the idea that the variance is the same for each individual implies that the error term represents only measurement error. This is not usually how the basic multiple regression model is constructed. Also as you say, variance is defined for a group (whether it's a group of individual subjects or a group of measurements). It doesn't apply at the individual level, unless you have repeated measures. A model needs to be complete in that the error term should not contain influences from any variables that are correlated with predictors. The assumption is that the error term is independent of predictors. If some correlated variable is omitted, you will get biased coefficients (this is called omitted variable bias).
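Omitted variable bias is easy to demonstrate numerically. The sketch below uses invented coefficients and a deliberately correlated omitted regressor:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
x = rng.normal(size=n)
w = 0.8 * x + rng.normal(scale=0.6, size=n)  # correlated with x
y = 1.0 * x + 1.0 * w + rng.normal(size=n)   # true model uses both

X_full = np.column_stack([x, w])
b_full, *_ = np.linalg.lstsq(X_full, y, rcond=None)   # both regressors
b_short, *_ = np.linalg.lstsq(x[:, None], y, rcond=None)  # w omitted

print(b_full)   # close to [1.0, 1.0]
print(b_short)  # close to 1.8 = 1 + Cov(x, w)/Var(x): biased upward
```

The short regression's coefficient on x absorbs w's effect in proportion to their correlation, which is exactly the omitted variable bias formula.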
24,509
Predicting multiple targets or classes?
This is known in the Machine Learning community as "Multi-Label Learning". There are various approaches to the problem, including the ones you describe in your question. Some resources to get you started:
- Learning from Multi-Label Data
- Mulan: A Java Library for Multi-Label Learning
- Machine Learning Journal special issue on Learning from Multi-Label Data (yet to be published, I believe)
- Streaming multi-label classification (video)
- Combining Instance-Based Learning and Logistic Regression for Multi-Label Classification (video)
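The simplest multi-label approach, one independent classifier per label ("binary relevance"), can be sketched with scikit-learn; the data here is synthetic and purely illustrative:

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Synthetic problem: 4 labels, any subset of which can be "on" per sample.
X, Y = make_multilabel_classification(n_samples=200, n_features=10,
                                      n_classes=4, random_state=0)
# One logistic regression per label, fit independently.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, Y)
pred = clf.predict(X[:5])
print(pred.shape)  # (5, 4): one 0/1 prediction per label per sample
```

Binary relevance ignores correlations between labels; the resources above cover methods (classifier chains, label powerset, etc.) that exploit them.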
24,510
Predicting multiple targets or classes?
Where you have two variables with the same predictors, and variable B also has variable A as a predictor, you are possibly looking at an optimization problem, where you want to optimize the estimates of A and B simultaneously. It makes no sense to optimize one, if you then get a bad estimate for the second. This would be an operations research problem, and unfortunately outside my realm of expertise.
24,511
Is there a way to explain a prediction from a random forest model?
A first idea is just to mimic the knock-out strategy from variable importance: test how scrambling each attribute degrades the forest's confidence in the object's classification (on OOB data, and with some repetitions, obviously). This requires some coding but is certainly achievable. However, I feel it is just a bad idea -- the result will probably be extremely variable (without the stabilizing effect of averaging over objects), noisy (for not-so-confident objects, nonsense attributes could have big impacts) and hard to interpret (rules involving two or more cooperating attributes will probably show up as seemingly random impacts of each contributing attribute). So as not to leave you with a purely negative answer, I would rather try looking at the proximity matrix and the possible archetypes it may reveal -- this seems much more stable and straightforward.
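The knock-out strategy described above can be sketched as follows; the dataset, repetition count, and other specifics are illustrative choices, not from the answer:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
obj = X[0]
cls = rf.predict([obj])[0]
base = rf.predict_proba([obj])[0][cls]   # forest's confidence in its call

for j in range(X.shape[1]):
    # Scramble feature j for this one object, repeated 50 times.
    probes = np.tile(obj, (50, 1))
    probes[:, j] = rng.choice(X[:, j], size=50)
    drop = base - rf.predict_proba(probes)[:, cls].mean()
    print(f"feature {j}: confidence drop {drop:+.3f}")
```

As the answer warns, these per-object drops can be noisy and do not untangle features that act jointly.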
24,512
Is there a way to explain a prediction from a random forest model?
I would try the lime framework. It works with many models (including random forests). It can be used for local interpretation (that is, explaining a single prediction) or for global interpretation (that is, explaining a whole model). Quoting from the abstract: In this work, we propose LIME, a novel explanation technique that explains the predictions of any classifier in an interpretable and faithful manner, by learning an interpretable model locally around the prediction. We also propose a method to explain models by presenting representative individual predictions and their explanations in a non-redundant way, framing the task as a submodular optimization problem. It has packages both for R and Python, and many examples if you google it.
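Rather than reproduce the lime package's API here, the sketch below hand-rolls LIME's core idea (perturb the instance, weight perturbations by proximity, fit a small linear surrogate to the forest's outputs) with scikit-learn; every specific choice is illustrative:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

X, y = load_breast_cancer(return_X_y=True)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

rng = np.random.default_rng(0)
x0 = X[0]                                      # the prediction to explain
scale = X.std(axis=0)
Z = x0 + rng.normal(size=(500, X.shape[1])) * scale  # local neighborhood
pz = rf.predict_proba(Z)[:, 1]                 # black-box outputs there
w = np.exp(-((Z - x0) / scale) ** 2).mean(axis=1)    # proximity weights

# Weighted linear surrogate of the forest, valid only near x0.
local = Ridge(alpha=1.0).fit((Z - x0) / scale, pz, sample_weight=w)
top = np.argsort(-np.abs(local.coef_))[:5]
print(top)  # indices of the locally most influential features
```

The surrogate's coefficients are the "explanation" for this single prediction; the actual lime package adds interpretable feature representations and the submodular-pick machinery on top of this idea.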
24,513
Online, scalable statistical methods
You might look into the Vowpal Wabbit project, from John Langford at Yahoo! Research. It is an online learner that performs specialized gradient descent on a few loss functions. VW has some killer features:
- Installs on Ubuntu trivially, with "sudo apt-get install vowpal-wabbit".
- Uses the hashing trick for seriously huge feature spaces.
- Feature-specific adaptive weights.
Most importantly, there is an active mailing list and community plugging away on the project. The Cesa-Bianchi & Lugosi book Prediction, Learning, and Games gives a solid theoretical foundation to online learning. A heavy read, but worth it!
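Two of the ingredients mentioned, the hashing trick and online gradient updates, can be sketched in a few lines of pure Python; this is a toy illustration of the idea, not VW's actual implementation:

```python
import zlib

D = 2 ** 18                      # fixed-width weight table
w = [0.0] * D

def feats(tokens):
    # Hashing trick: map arbitrary feature strings to table indices.
    return [zlib.crc32(t.encode()) % D for t in tokens]

def predict(idx):
    return sum(w[i] for i in idx)

def learn(tokens, target, lr=0.1):
    # One online SGD step for squared loss on a single example.
    idx = feats(tokens)
    err = predict(idx) - target
    for i in idx:
        w[i] -= lr * err
    return err

for _ in range(100):             # stream the same tiny example repeatedly
    learn(["color=red", "size=big"], 1.0)
print(round(predict(feats(["color=red", "size=big"])), 3))  # converges to 1.0
```

Because the table has fixed size, memory never grows with the feature vocabulary; occasional hash collisions are the price paid, and in practice they act like a mild form of regularization.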
24,514
What are the advantages (if any) of the Kolmogorov-Smirnov test over other tests?
Bear in mind some general deficiencies of hypothesis tests of "nowhere dense" sets. To give some context to the many intelligent answers I expect this question will attract, I'm going to start by pointing out a big disadvantage of all tests of this general class, including the KS test, the AD test, and many other variants that test whether or not data comes from a specific distribution or parametric distributional family. These are all tests of a null hypothesis that is "nowhere dense", meaning that we are testing an extremely specific class of distributions in the null hypothesis compared to a much larger class of distributions in the alternative hypothesis. In most cases where we have data from a process, there may be some theoretical reason to think that a particular distributional form might hold approximately, but it is extremely rare to have reason to believe that a narrow distributional form would hold exactly. Even if we have good theoretical reason to think that a certain distributional form would hold (e.g., the mean of IID data leading to a normal distribution via the CLT), the assumptions made in the theoretical derivation are usually only approximations to reality, and so the exact distribution at issue is usually still slightly different from the theoretical distribution. Typically this means that the null hypothesis is "always false", and so the test effectively becomes one where the p-value will converge to zero with enough data. This is the primary problem that motivated the ASA statement on p-values and the general aversion that the statistics community has to classical hypothesis tests with a point-null hypothesis. Many statisticians are averse to this type of test because it is a test of a hypothesis that is so specific that it is a priori impossible, and so if the null hypothesis is not rejected, that is only because we lack enough data for adequate test power.
For this reason, many statisticians prefer to use interval estimation methods like confidence intervals and credibility intervals, which give an estimate for a range of values of the unknown quantity/parameter/distribution of interest, rather than testing against a specific case. The KS test, AD test, etc., are not easily amenable to conversion to a confidence interval over the function space of possible distribution functions. It is possible to get pointwise confidence bands using reasoning analogous to the KS test, but these usually only give confidence intervals for specific points, rather than confidence sets over the space of distribution functions. You can of course draw distinctions between distributional tests of this general class in terms of their power function against specific kinds of alternatives. As you've pointed out in your question, some of the tests are more powerful against certain kinds of alternatives than others, and it is possible to do a deep dive into this by comparing alternative tests with simulation analysis, etc. While this is possible (and it is useful in understanding the relative merits of all these tests), it does not get past the primary problem that occurs when we test a point-null hypothesis or a "nowhere dense" hypothesis in a large space. If we test a hypothesis that is so specific that it is "always false", then the test operates as it should, rejecting the null with enough data. The test therefore becomes primarily a test of how much data we have, which we already know. It is worth noting that it is possible to amend any test of a point-null hypothesis or a "nowhere dense" hypothesis by imposing a non-zero "tolerance" for deviation from the stipulated class within the null hypothesis and amending the test statistic accordingly. With a bit of work the KS test can be amended in this way, as can the AD test and other distributional tests.
This solves the problem of testing against a "nowhere dense" region (and is how I would recommend dealing with these types of tests), but it then means that there is some additional arbitrariness in how large you make your "tolerance" in the null hypothesis. Confidence intervals and other region-based estimators sidestep this deficiency in the first place by looking for a region-based estimator of the unknown object of interest rather than looking at the level of evidence for deviation from a stipulated set of values of that object.
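The "p-value converges to zero" point is easy to see by simulation: draw from a distribution only slightly different from the null (a t distribution with 30 df vs. a standard normal) and watch the KS p-value collapse as n grows. An illustrative sketch:

```python
import numpy as np
from scipy.stats import kstest, t

rng = np.random.default_rng(0)
pvals = {}
for n in (100, 1_000_000):
    # t(30) is visually indistinguishable from N(0, 1), but not equal to it.
    x = t.rvs(df=30, size=n, random_state=rng)
    pvals[n] = kstest(x, "norm").pvalue
    print(n, pvals[n])
```

At n = 100 the test typically has nowhere near enough power to see the discrepancy; at n = 1,000,000 it rejects emphatically, even though nothing about the data-generating process changed.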
24,515
What are the advantages (if any) of the Kolmogorov-Smirnov test over other tests?
Note: I do not intend to accept my own answer, just wanted to provide info from an interesting relevant reference I just came across, which may interest people viewing this question. The paper N. Razali and Y.B. Wah (2011), "Power comparisons of Shapiro–Wilk, Kolmogorov–Smirnov, Lilliefors and Anderson–Darling tests", Journal of Statistical Modeling and Analytics, 2 (1): 21–33, reports: "Power comparisons of these four tests were obtained via Monte Carlo simulation of sample data generated from alternative distributions that follow symmetric and asymmetric distributions. ... Results show that Shapiro-Wilk test is the most powerful normality test, followed by Anderson-Darling test, Lilliefors test and Kolmogorov-Smirnov test. However, the power of all four tests is still low for small sample size."
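For readers who want to reproduce this kind of comparison on a small scale, here is a rough Monte Carlo sketch (my own illustration; the paper's simulation design is far more extensive). Note that the plain KS test with parameters estimated from the data is conservative, which is exactly what the Lilliefors correction addresses, so the KS power below is understated.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, reps, alpha = 50, 500, 0.05
rej_sw = rej_ks = 0
for _ in range(reps):
    x = rng.exponential(size=n)  # a skewed, clearly non-normal alternative
    if stats.shapiro(x)[1] < alpha:
        rej_sw += 1
    # Plug-in KS: standardise with the estimated mean/sd (conservative
    # without the Lilliefors correction).
    z = (x - x.mean()) / x.std(ddof=1)
    if stats.kstest(z, "norm")[1] < alpha:
        rej_ks += 1
power_sw, power_ks = rej_sw / reps, rej_ks / reps
print(f"Shapiro-Wilk power: {power_sw:.2f}, KS (plug-in) power: {power_ks:.2f}")
```

Swapping in other alternatives (t-distributions, mixtures, uniform) reproduces the qualitative ordering reported in the paper.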
24,516
Should AstraZeneca's results be discounted?
My opinion is no. I felt that AZ's primary publication was high quality: reporting the planned analyses, presenting a very important unplanned analysis, explaining the unexpected results of that analysis, and claiming that more time and data will be needed to confirm any findings. The question boils down to intent-to-treat versus per-protocol analysis. Every trial has unexplained deviations, and sometimes they even affect manufacturing, so that a large fraction of trial participants are affected by a non-randomized condition. This can be so severe that the planned analyses need to be reframed to report any generalizable result. Unfortunately for AZ, who seem to have a fairly good vaccine, all of the above happened with their ChAdOx trial. This cost them vital time and "vitaler" money in a highly competitive environment.
Deviations and unexpected findings can at times generate hypotheses that are so striking they need to be evaluated in subsequent studies. But the operative word is "subsequent": you can't use the same data that generated a hypothesis to confirm the hypothesis. Minoxidil, for instance, was a vasodilator piloted as a heart medication but was unintentionally found to cause hair growth in those with male pattern baldness. They conducted a secondary study and showed a statistically significant effect. For AZ, if they truly believe the low dose or "priming low dose" was effective, they need a major protocol amendment, and an analysis plan that excludes results from the prior analyses; a very costly endeavor. I recall this was investigated at length; the priming dose hypothesis was ultimately rejected. Now ChAdOx has emergency use authorization from WHO and is being used globally (outside the US). In general, when reporting out the results of a clinical trial, it is better to err on the side of full disclosure.
According to my reading of the primary publication, they correctly report the intent-to-treat (so-called "average") effectiveness of 70%. The "per protocol" effectiveness of the AZ drug was also reported (among those who received both full doses) as 62·1% (95% CI 41·0–75·7), which is still a promising result. They correctly report the results of a rather serious post-hoc finding: that people receiving an "accidental" dose had statistically better efficacy than those receiving the planned dose.
In participants who received two standard doses, vaccine efficacy was 62·1% (95% CI 41·0–75·7; 27 [0·6%] of 4440 in the ChAdOx1 nCoV-19 group vs 71 [1·6%] of 4455 in the control group) and in participants who received a low dose followed by a standard dose, efficacy was 90·0% (67·4–97·0; three [0·2%] of 1367 vs 30 [2·2%] of 1374; p_interaction=0·010). Overall vaccine efficacy across both groups was 70·4% (95·8% CI 54·8–80·6; 30 [0·5%] of 5807 vs 101 [1·7%] of 5829).
The article goes further, stating that the results are unexpected, that they performed other subgroup analyses to see if possible confounding could explain the issue, and that no statistical test identified a difference. They clarify further that the results, while promising, will require a later read-out of the data and results from other studies to understand any difference, if not due to chance. The issue had to do with manufacturing. The primary publication is open access and you can read it for yourself: https://www.thelancet.com/journals/lancet/article/PIIS0140-6736(20)32661-1/fulltext#tbl2
The low dosing issue was one of lack of oversight from AZ.
Two dosage groups were included in COV002: participants who received a low dose of the vaccine (2·2 × 1010 viral particles) as their first dose and were boosted with a standard dose (in the LD/SD group), and subsequent cohorts who were vaccinated with two standard-dose vaccines (SD/SD group). Initial dosing in COV002 was with a batch manufactured at a contract manufacturing organisation using chromatographic purification. During quality control of this second batch, differences were observed between the quantification methods (spectrophotometry and quantitative PCR [qPCR]) prioritised by different manufacturing sites. In consultation with the national regulator (Medicines and Healthcare products Regulatory Agency), we selected a dose of 5 × 1010 viral particles by spectrophotometer (2·2 × 1010 viral particles by qPCR), in order to be consistent with the use of spectrophotometry in the phase 1 study (COV001),5 and to ensure the dose was within a safe and immunogenic range according to measurements by both methods. A lower-than-anticipated reactogenicity profile was noted in the trial, and unexpected interference of an excipient with the spectrophotometry assay was identified. After review and approval by the regulator, it was concluded that the qPCR (low-dose) reading was more accurate and further doses were adjusted to the standard dose (5 × 1010 viral particles) using a qPCR assay. The protocol was amended on June 5, 2020, resulting in enrolment of two distinct groups with different dosing regimens with no pause in enrolment (version 6.0; appendix 2 p 330). A suite of assays has now been developed for characterisation of concentration (which confirmed the low and standard dosing), and future batches are all released with a specification dose of 3·5–6·5 × 1010 viral particles, and this was used for the booster doses in the efficacy analysis presented here.
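As a sanity check, the headline efficacy figures can be approximated directly from the reported case counts with a crude risk ratio (my own back-of-the-envelope calculation; the trial's formal analysis used a regression model, so the published figures differ slightly):

```python
def vaccine_efficacy(cases_vax, n_vax, cases_ctrl, n_ctrl):
    # Crude vaccine efficacy in percent: 1 minus the risk ratio.
    risk_ratio = (cases_vax / n_vax) / (cases_ctrl / n_ctrl)
    return 100 * (1 - risk_ratio)

ve_sd_sd = vaccine_efficacy(27, 4440, 71, 4455)    # two standard doses
ve_ld_sd = vaccine_efficacy(3, 1367, 30, 1374)     # low then standard dose
ve_overall = vaccine_efficacy(30, 5807, 101, 5829) # both groups combined
print(f"SD/SD ~{ve_sd_sd:.1f}%, LD/SD ~{ve_ld_sd:.1f}%, overall ~{ve_overall:.1f}%")
```

These land within a fraction of a percentage point of the reported 62·1%, 90·0%, and 70·4%.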
24,517
How is EMSE derived for causal trees in Athey and Imbens (PNAS 2016)?
I guess (3) is another typo in the paper; it should be $-\mathbb{E}_{(Y_i, X_i), S^{est}}[(Y_i - \hat{\mu}(X_i; S^{est}, \Pi))^2 - Y_i^2]$. Then the OP's derivations show that this is indeed $\mathbb{E}_{S^{est}, S^{te}}[MSE(S^{te}, S^{est}, \Pi)]$, and my answer to question 2 shows how to get from this to (4).
As for question 2, the answer is "almost". First, notice the typo: it should be $\mathbb{E}_{X_i, \mathcal{S}^{est}}[\mathbb{V}[\hat{\mu}(X_i, \mathcal{S}^{est}, \Pi)]]$ (not $\hat{\mu}^2$). Second, I think the notation $\mathbb{E}_{X_i, \mathcal{S}^{est}}[\mathbb{V}[\hat{\mu}(X_i, \mathcal{S}^{est}, \Pi)]]$ is a bit confusing, since it seems like the inner variance (as a usual variance) is constant, so the outer expectation does not make any difference. In fact, what they meant by this expression is $$\mathbb{E}_{X_i}[\mathbb{V}_{\mathcal{S}^{est}}[\hat{\mu}(X_i, \mathcal{S}^{est}, \Pi)]]$$ With this notation, it is clear that the inner variance is a random variable (as a function of $X_i$), so the outer expectation is taken with respect to $X_i$.
To get from (3) to (4), add and subtract $\mu(X_i; \Pi)$: $$(Y_i - \mu(X_i; \Pi)+\mu(X_i; \Pi) - \hat{\mu}(X_i; \mathcal{S}, \Pi))^2 - Y_i^2$$ then open up the brackets; you'll get three terms:
$(Y_i - \mu(X_i; \Pi))^2 - Y_i^2$
$(\hat{\mu}(X_i; \mathcal{S}, \Pi) - \mu(X_i; \Pi))^2$
$2(Y_i - \mu(X_i; \Pi))(\mu(X_i; \Pi) - \hat{\mu}(X_i; \mathcal{S}, \Pi))$
The first term then simplifies to $\mathbb{E}[\mu^2(X_i; \Pi) - 2Y_i \mu(X_i; \Pi)]$, the second to $\mathbb{E}_{X_i}[\mathbb{V}_{\mathcal{S}^{est}}[\hat{\mu}(X_i, \mathcal{S}^{est}, \Pi)]]$, and the third (using the fact that $\mathbb{E}_{\mathcal{S}}[\hat{\mu}(x; \mathcal{S}, \Pi)] = \mu(x; \Pi)$ and that $(X_i, Y_i)\ \perp\ \mathcal{S}^{est}$) to $2 \mathbb{E}_{(Y_i, X_i)}[Y_i \mu(X_i; \Pi)] - 2\mathbb{E}_{X_i}[ \mu(X_i) \mu(X_i; \Pi)]$.
Finally, using the law of total expectation (partitioning over the leaves) and the fact that $\mu(x; \Pi) = \mathbb{E}[\mu(X_i)|X_i \in \ell(x; \Pi)]$, it can be shown that $2\mathbb{E}_{X_i}[ \mu(X_i) \mu(X_i; \Pi)] = 2 \mathbb{E}_{X_i}[\mu^2(X_i; \Pi)]$. Then, combining everything together, you get the formula.
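Writing out the combination step explicitly (my own summary, in the same notation): the $2\mathbb{E}[Y_i \mu(X_i; \Pi)]$ terms cancel and the $\mu^2$ terms collapse, giving
$$\mathbb{E}\big[(Y_i - \hat{\mu}(X_i; \mathcal{S}^{est}, \Pi))^2 - Y_i^2\big] = -\,\mathbb{E}_{X_i}\big[\mu^2(X_i; \Pi)\big] + \mathbb{E}_{X_i}\big[\mathbb{V}_{\mathcal{S}^{est}}[\hat{\mu}(X_i; \mathcal{S}^{est}, \Pi)]\big],$$
so negating both sides yields the target expression
$$\mathbb{E}_{X_i}\big[\mu^2(X_i; \Pi)\big] - \mathbb{E}_{X_i}\big[\mathbb{V}_{\mathcal{S}^{est}}[\hat{\mu}(X_i; \mathcal{S}^{est}, \Pi)]\big].$$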
24,518
How to find the sample points that have statistically meaningful large outlier ratios between two values of the point?
I think the Wilson score confidence interval can be applied directly to your problem. The score used in the blog was a lower bound of the confidence interval instead of an expected value. Another method for such problems is to correct (bias) our estimation towards some prior knowledge we have, for example the overall view/rep ratio. Say the number of views per reputation follows a normal distribution $v \sim N(\mu,\sigma) $; then the view/rep ratio can be interpreted as the maximum likelihood estimate (MLE) of the distribution mean $\mu$. The problem is, as you mentioned, that when there is insufficient data, the MLE result is of high variance (overfitting). A simple solution for this is to introduce a prior distribution on $\mu$ and do maximum a posteriori estimation (MAP). The prior distribution $p(\mu)$ can be another normal distribution estimated from all the samples we have. In practice, this is essentially a weighted average of the overall view/rep ratio and the user view/rep ratio, $$\mu_{MAP}=\frac{n\mu_{MLE}+c\mu_0}{n+c}$$ where $n$ is the number of reps a user has, $c$ is a constant, $\mu_{MLE}$ is the view/rep ratio of the user and $\mu_0$ the overall view/rep ratio. Comparing the two methods (Wilson score confidence interval lower bound and MAP): they both give an accurate estimate when there is sufficient data (reps); when the number of reps is small, the Wilson lower bound method biases towards zero and MAP biases towards the mean.
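Both estimators are a few lines of Python. This is my own sketch: the Wilson bound strictly applies to a binomial proportion (as in the blog's upvote-ratio example), and the constant `c` in the MAP estimator is a hypothetical prior-strength parameter you would tune.

```python
import math

def wilson_lower_bound(pos, n, z=1.96):
    # Lower bound of the Wilson score interval for a binomial proportion
    # pos successes out of n trials, at ~95% confidence for z=1.96.
    if n == 0:
        return 0.0
    phat = pos / n
    denom = 1 + z**2 / n
    centre = phat + z**2 / (2 * n)
    margin = z * math.sqrt((phat * (1 - phat) + z**2 / (4 * n)) / n)
    return (centre - margin) / denom

def map_ratio(views, reps, mu0, c=10):
    # Shrink the per-user view/rep ratio (views/reps) towards the overall
    # ratio mu0; note reps * (views/reps) = views, hence the numerator.
    return (views + c * mu0) / (reps + c)
```

For example, 8 upvotes out of 10 gets a much lower Wilson bound than 80 out of 100, even though both ratios are 0.8; and a user with 500 views on only 5 reps is pulled strongly towards the site-wide ratio by `map_ratio`.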
24,519
Why does the redundant mean parameterization speed up Gibbs MCMC?
The correlation to be avoided is the one between $\mu$ and the $\gamma_j$ and $\delta_k$. By replacing $\gamma_j$ and $\delta_k$ in the computational model with alternative parameters that are centered around $\mu$, the correlation is reduced. For a very clear description, see section 25.1, 'What is hierarchical centering?', in the (freely available) book 'MCMC estimation in MLwiN' by William J. Browne and others. http://www.bristol.ac.uk/cmm/software/mlwin/download/manuals.html
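Schematically (a generic sketch; the exact model in your question may differ), hierarchical centering replaces
$$y_i = \mu + \gamma_{j[i]} + \delta_{k[i]} + \epsilon_i, \qquad \gamma_j \sim N(0, \sigma_\gamma^2),$$
with the equivalent parameterization
$$y_i = \eta_{j[i]} + \delta_{k[i]} + \epsilon_i, \qquad \eta_j \sim N(\mu, \sigma_\gamma^2),$$
where $\eta_j = \mu + \gamma_j$. The two models define the same likelihood, but the Gibbs full conditionals for $\eta_j$ and $\mu$ are far less correlated than those for $\gamma_j$ and $\mu$, so the chain mixes faster.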
24,520
Difference between MLE and Baum Welch on HMM fitting
Referring to one of the answers (by Masterfool) from the question you linked, Morat's answer is false on one point: Baum-Welch is an Expectation-Maximization algorithm, used to train an HMM's parameters. It uses the forward-backward algorithm during each iteration. The forward-backward algorithm really is just a combination of the forward and backward algorithms: one forward pass, one backward pass. And I agree with PierreE's answer here: the Baum–Welch algorithm is used to solve maximum likelihood in an HMM. If the states are known (supervised, labeled sequences), then another method maximizing the likelihood is used (for example, simply counting the frequency of each emission and transition observed in the training data; see the slides provided by Franck Dernoncourt). In the setting of MLE for an HMM, I don't think you can just use gradient descent, since the likelihood (or log-likelihood) has no closed-form maximizer and must be optimized iteratively, as in mixture models, so we turn to EM. (See more details in Bishop, Pattern Recognition and Machine Learning, section 13.2.1, p. 614.)
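To illustrate the supervised case mentioned above, here is a minimal sketch (my own, not from the linked slides, with made-up weather states/observations) of MLE by counting when the state sequences are observed:

```python
from collections import Counter

def fit_supervised_hmm(state_seqs, obs_seqs):
    # Supervised MLE for an HMM: transition and emission probabilities
    # are just normalised counts over the labelled training sequences.
    trans, emit = Counter(), Counter()
    for states, obs in zip(state_seqs, obs_seqs):
        for s, o in zip(states, obs):
            emit[(s, o)] += 1
        for s1, s2 in zip(states, states[1:]):
            trans[(s1, s2)] += 1

    def normalise(counts):
        totals = Counter()
        for (a, _), c in counts.items():
            totals[a] += c
        return {k: c / totals[k[0]] for k, c in counts.items()}

    return normalise(trans), normalise(emit)

trans, emit = fit_supervised_hmm(
    [["Rainy", "Rainy", "Sunny", "Sunny"]],
    [["umbrella", "umbrella", "none", "umbrella"]],
)
```

Baum-Welch does the same thing when the states are hidden, except the hard counts above are replaced by expected counts computed with the forward-backward algorithm at each EM iteration.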
24,521
Difference between MLE and Baum Welch on HMM fitting
This question has been here for a few months but this answer might help new readers, as a complement to David Batista's comment. The Baum-Welch algorithm (BW) is an expectation maximization algorithm used to solve maximum likelihood estimation (MLE) in order to train your HMM when the states are unknown/hidden (unsupervised training). But if you know the states, you can use an MLE method (which will not be BW) to fit your model to the paired data/states in a supervised fashion.
24,522
Difference between MLE and Baum Welch on HMM fitting
So, what's the relationship between MLE and the Baum–Welch algorithm? The expectation-maximization (EM) algorithm is more general, and the Baum-Welch algorithm is simply an instantiation of it; EM is an iterative algorithm for maximum likelihood (ML), so the Baum-Welch algorithm is also an iterative algorithm for maximum likelihood. There are normally three optimization approaches for maximum likelihood estimation (a frequentist approach): 1) gradient descent; 2) Markov chain Monte Carlo; 3) expectation maximization.
24,523
Exact difference between two-part models (e.g., Cragg) and Tobit type 2 models (e.g., Heckman)
Thanks for asking, Mark. In the context of my data I ended up using the double-hurdle model proposed by Blundell (the first bullet of my suggested solutions). Based on the feedback I received at academic conferences, this seems to be a viable approach. I also ended up using the R package mhurdle. Weights simply do not work - the rest of the code seems to be very solid. Regarding my specific questions, I do not have a definitive answer to all of them, but let me try to summarise what I learnt: Is my statement about the three models correct? It appears so - yes. Are the sources of zeros the only/main decision criteria? They are certainly not the only decision criteria, but in the context of data with a mass point at zero, spending significant time on understanding how the zeros are generated is tremendously important. What are the key decision criteria I should consider/discuss when deciding what type of model to use? Besides the obvious questions regarding the type of dependent variable and its distribution, the two main questions involving data with a mass point at zero are: Do you want to distinguish your results by the two different stages, or is it sufficient to report one set of coefficients? If one set is sufficient, you may use a Tobit model; otherwise you need a two-part model, where the discussion about the different sources of zeros comes into play. Is there more than 'just' the source of the zeros? Yep - there is. At least two kinds: observed/true zeros and unobserved/false zeros (the latter actually being either NAs or values so small that they are recoded as 0). Hope this helps you a bit! Jan
24,524
In what situation would Wilcoxon's Signed-Rank Test be preferable to either t-Test or Sign Test?
Consider a distribution of pair-differences that is somewhat heavier tailed than normal, but not especially "peaky"; then often the signed rank test will tend to be more powerful than the t-test, but also more powerful than the sign test. For example, at the logistic distribution, the asymptotic relative efficiency of the signed rank test relative to the t-test is 1.097 so the signed rank test should be more powerful than the t (at least in larger samples), but the asymptotic relative efficiency of the sign test relative to the t-test is 0.822, so the sign test would be less powerful than the t (again, at least in larger samples). As we move to heavier-tailed distributions (while still avoiding overly-peaky ones), the t will tend to perform relatively worse, while the sign-test should improve somewhat, and both sign and signed-rank will outperform the t in detecting small effects by substantial margins (i.e. will require much smaller sample sizes to detect an effect). There will be a large class of distributions for which the signed-rank test is the best of the three. Here's one example -- the $t_3$ distribution. Power was simulated at n=100 for the three tests, for a 5% significance level. The power for the $t$ test is marked in black, that for the Wilcoxon signed rank in red and the sign test is marked in green. The sign test's available significance levels didn't include any especially near 5% so in that case a randomized test was used to get close to the right significance level. The x-axis is the $\delta$ parameter which represents the shift from the null case (the tests were all two-sided, so the actual power curve would be symmetric about 0). As we see in the plot, the signed rank test has more power than the sign test, which in turn has more power than the t-test.
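A simulation along these lines is straightforward to reproduce. The sketch below is not the original code: the shift $\delta = 0.5$, replicate count, and seed are arbitrary choices, and it compares only the attained powers rather than reproducing the full power curve:

```python
import numpy as np
from scipy import stats

def power(pvalue_fn, delta, n=100, reps=500, alpha=0.05, seed=1):
    """Monte Carlo power of a one-sample test on shifted t_3 data."""
    rng = np.random.default_rng(seed)
    rejections = 0
    for _ in range(reps):
        d = stats.t.rvs(df=3, size=n, random_state=rng) + delta
        rejections += pvalue_fn(d) < alpha
    return rejections / reps

t_test      = lambda d: stats.ttest_1samp(d, 0).pvalue
signed_rank = lambda d: stats.wilcoxon(d).pvalue
sign_test   = lambda d: stats.binomtest(int((d > 0).sum()), d.size, 0.5).pvalue

p_t, p_w, p_s = (power(f, delta=0.5) for f in (t_test, signed_rank, sign_test))
```

Because each `power` call reuses the same seed, the three tests are evaluated on identical simulated datasets, which makes the comparison paired and lowers Monte Carlo noise.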
24,525
Time series prediction using ARIMA vs LSTM
A comparison of artificial neural network and time series models for forecasting commodity prices compares the performance of ANN and ARIMA in predicting financial time series. I think it is a good starting point for your literature review. In many cases, neural networks tend to outperform AR-based models. However, I think that one major drawback (which is not discussed that much in the academic literature) of more advanced machine learning methods is that they are black boxes. This is a big problem if you have to explain how the model works to someone who doesn't know much about these models (for example in a corporation). But if you are doing this analysis just as school work, I don't think this is going to be an issue. And, as the previous commenter said, often the best approach is to form an ensemble estimator in which you combine two or more models.
24,526
Hierarchical clustering with categorical variables
Yes of course, categorical data are frequently a subject of cluster analysis, especially hierarchical. A lot of proximity measures exist for binary variables (including dummy sets which are the litter of categorical variables); also entropy measures. Clusters of cases will be the frequent combinations of attributes, and various measures give their specific spice for the frequency reckoning. One problem with clustering categorical data is stability of solutions. And this recent question puts forward the issue of variable correlation.
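As a concrete illustration (the toy data and parameter choices here are mine, not from the question), dummy-coding the categories and using a binary proximity measure feeds directly into ordinary hierarchical clustering:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# Toy categorical data: each row is a case, each column an attribute.
data = [("red", "small"), ("red", "small"),
        ("blue", "large"), ("blue", "large")]

# Dummy-code every category level into a 0/1 indicator column.
levels = [sorted({row[j] for row in data}) for j in range(len(data[0]))]
X = np.array([[int(row[j] == lev)
               for j, lvls in enumerate(levels) for lev in lvls]
              for row in data])

# Jaccard distance on the dummies, then average-linkage clustering.
d = pdist(X.astype(bool), metric="jaccard")
labels = fcluster(linkage(d, method="average"), t=2, criterion="maxclust")
```

Here the frequent attribute combinations (red/small vs. blue/large) come out as the two clusters.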
24,527
Bayesian modeling of train wait times: The model definition
I will tell you first what I would do, and then I'll answer the specific questions you had. What I would do (at least initially) Here is what I gather from your post: you have train waiting times for 19 observations, and you are interested in inference about the expected waiting time. I will define $W_i$ for $i=1,\ldots,19$ to be the waiting time for train $i$. I see no reason that these waiting times should be integers, so I will assume they are positive continuous quantities, i.e. $W_i\in\mathbb{R}^+$. I'm assuming all the waiting times are actually observed. There are several possible model assumptions that could be used, and with 19 observations it may be hard to determine which model is more reasonable. Some examples are log-normal, gamma, exponential, Weibull. As a first model, I would suggest modeling $Y_i=\log(W_i)$ and then assuming $$ Y_i \stackrel{ind}{\sim} N(\mu,\sigma^2). $$ With this choice, you have access to the wealth of normal theory that exists, e.g. a conjugate prior. The conjugate prior is a normal-inverse-gamma distribution, i.e. $$ \mu|\sigma^2 \sim N(m,\sigma^2 C) \quad \sigma^2 \sim IG(a,b) $$ where $IG$ is the inverse gamma distribution. Alternatively, you could use the default prior $p(\mu,\sigma^2)\propto 1/\sigma^2$, in which case the posterior is also a normal-inverse-gamma distribution. Since $E[W_i] = e^{\mu+\sigma^2/2}$, we can answer questions about the expected waiting time by drawing joint samples of $\mu$ and $\sigma^2$ from their posterior distribution, which is a normal-inverse-gamma distribution, and then calculating $e^{\mu+\sigma^2/2}$ for each of these samples. This samples from the posterior for the expected waiting time. Answering your questions Is this model reasonable for the task (several possible ways to model?)? A Poisson does not seem appropriate for data that could be non-integer valued. 
You only have a single $\lambda$ and therefore you cannot learn the parameters of the gamma distribution you have assigned to $\lambda$. Another way to say this is that you have built a hierarchical model, but there is no hierarchical structure in the data. Did I make any beginner mistakes? See the previous comment. Also, it will really help if your math and your code agree, e.g. where is $\lambda$ in your MCMC results? What are sd and rate in your code? Your prior should not depend on the data. Can the model be simplified (I tend to complicate simple things)? Yes, and it should. See my modeling approach. How can I verify if the posterior for the rate parameter ($\rho$) is actually fitting the data? Isn't $\rho$ supposed to be your data? Do you mean $\lambda$? One thing to check is to make sure the sample average wait time makes sense relative to your posterior distribution on the average wait time. Unless you have a bizarre prior, the sample average should be near the peak of the posterior distribution. How can I draw some samples from the fitted Poisson distribution to see the samples? I believe you want a posterior predictive distribution. For each iteration in your MCMC, you plug in the parameter values for that iteration and take a sample.
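The sampling step under the default prior $p(\mu,\sigma^2)\propto 1/\sigma^2$ can be sketched as follows. The 19 waiting times here are simulated placeholders, since the original data are not shown in the post:

```python
import numpy as np

rng = np.random.default_rng(1)

# Placeholder waiting times in minutes; the real 19 observations go here.
w = rng.gamma(shape=4.0, scale=3.0, size=19)
y = np.log(w)
n, ybar, s2 = y.size, y.mean(), y.var(ddof=1)

# Under p(mu, sigma^2) ∝ 1/sigma^2 the posterior is normal-inverse-gamma:
#   sigma^2 | y ~ Inv-Gamma((n-1)/2, (n-1) s^2 / 2)
#   mu | sigma^2, y ~ N(ybar, sigma^2 / n)
sigma2 = (n - 1) * s2 / rng.chisquare(n - 1, size=10_000)
mu = rng.normal(ybar, np.sqrt(sigma2 / n))

# Posterior draws of the expected waiting time E[W] = exp(mu + sigma^2/2).
expected_wait = np.exp(mu + sigma2 / 2)
```

Summaries of `expected_wait` (mean, quantiles) then give point and interval estimates for the expected waiting time.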
24,528
Special probability distribution
Preliminaries Write $$\mathcal{I}_p(\epsilon) = \int_0^\infty p(x) \log\left(\frac{p(x)}{(1+\epsilon)p(x(1+\epsilon))}\right)\, dx.$$ The logarithms and the relationship between $p(x)$ and $p(x(1+\epsilon))$ suggest expressing both $p$ and its argument as exponentials. To that end, define $$q(y) = \log(p(e^y))$$ for all real $y$ for which the right hand side is defined and equal to $-\infty$ wherever $p(e^y)=0$. Notice that the change of variables $x=e^y$ entails $dx=e^y dy$ and (taking $p$ to be the density of a distribution) that the Law of Total Probability can thereby be expressed as $$1 = \int_0^\infty p(x)dx = \int_\mathbb{R} e^{q(y)+y} dy.\tag{1}$$ Let us assume $e^{q(y)+y}\to 0$ when $y\to\pm\infty$. This rules out probability distributions $p$ with infinitely many spikes in density near $0$ or $\infty$. In particular, if the tails of $p$ are eventually monotonic, $(1)$ implies this assumption, showing it is not a severe one. To make working with the logarithms easier, also observe that $$1+\epsilon = e^\epsilon + O(\epsilon^2).$$ Because the following calculations will be performed up to multiples of $\epsilon^2$, define $$\delta = \log(1+\epsilon).$$ We might as well replace $1+\epsilon$ by $e^\delta$, with $\delta=0$ corresponding to $\epsilon=0$ and positive $\delta$ corresponding to positive $\epsilon$. Analysis One obvious way in which the inequality can fail would be for the integral $\mathcal{I}_p(\epsilon)$ to diverge for some $\epsilon \in (0, 1]$. This would happen if, for instance, there were to be any proper interval $[u, v]$ of positive numbers, no matter how small, in which $p$ were identically zero but $p$ were not zero on the interval $[u-\epsilon, v-\epsilon]$. That would cause the integrand to be infinite with positive probability. Because the question is unspecific concerning the nature of $p$, we could get bogged down in technical issues concerning how smooth $p$ might be. 
Let's avoid such issues, still hoping to gain some insight, by assuming that $q$ everywhere has as many derivatives as we might care to use. (Two will suffice if $q^{\prime\prime}$ is continuous.) Because that guarantees $q$ remains bounded on any bounded set, it implies that $p(x)$ is never zero when $x \gt 0$. Note that the question really concerns the behavior of $\mathcal{I}_p(\epsilon)$ as $\epsilon$ approaches zero from above. Since this integral is a continuous function of $\epsilon$ in the interval $(0,1]$, it attains some maximum $M_p(a)$ when $\epsilon$ is restricted to any positive interval $[a,1]$, enabling us to choose $c = M_p(a)/a^2$, because obviously $$c\epsilon^2 = M_p(a) \left(\frac{\epsilon}{a}\right)^2 \ge M_p(a) \ge \mathcal{I}_p(\epsilon)$$ makes the inequality work. This is why we need only be concerned with the calculation modulo $\epsilon^2$. Solution Using the changes of variable from $x$ to $y$, from $p$ to $q$, and $\epsilon$ to $\delta$, let's calculate $\mathcal{I}_p(\epsilon)$ through second order in $\epsilon$ (or $\delta$) in the hope of achieving a simplification. To that end define $$\mathcal{R}(y, \delta) \delta^2 = q(y+\delta) - q(y) - \delta q^\prime(y)$$ to be the order-$2$ remainder in the Taylor expansion of $q$ around $y$. $$\eqalign{ \mathcal{I}_p(\epsilon) &= \int_\mathbb{R}e^{q(y) + y} \left(q(y) - q(y+\delta) - \delta\right)\, dy \\ &=-\int_\mathbb{R}e^{q(y) + y} \left(\delta + \delta q^\prime(y) + \mathcal{R}(y, \delta) \delta^2 \right)\, dy \\ &= -\delta\int_\mathbb{R}e^{q(y) + y} \left(1+q^\prime(y)\right)\, dy -\delta^2\int_\mathbb{R}e^{q(y) + y} \mathcal{R}(y, \delta)\, dy. }$$ Changing variables to $q(y)+y$ in the left hand integral shows it must vanish, as remarked in the assumption following $(1)$. 
Changing variables back to $x=e^y$ in the right hand integral gives $$\mathcal{I}_p(\epsilon) = - \delta^2 \int_0^\infty p(x)\, \mathcal{R}(\log(x), \delta)\, dx = -\delta^2\, \mathbb{E}_p\left(\mathcal{R}(\log(x), \delta)\right).$$ The inequality holds (under our various technical assumptions) if and only if the coefficient of $\delta^2$ on the right hand side is finite. Interpretation This is a good point to stop, because it appears to uncover the essential issue: $\mathcal{I}_p(\epsilon)$ is bounded by a quadratic function of $\epsilon$ precisely when the quadratic error in the Taylor expansion of $q$ doesn't explode (relative to the distribution) as $y$ approaches $\pm\infty$. Let's check some of the cases mentioned in the question: the Exponential and Gamma distributions. (The Exponential is a special case of the Gamma.) We never have to worry about scale parameters, because they merely change the units of measurement. Only non-scale parameters matter. Here, because $p(x) = x^k e^{-x}$ for $k \gt -1$, $$q(y) = -e^y + k y - \log\Gamma(k+1).$$ The Taylor expansion around an arbitrary $y$ is $$\text{Constant} + (k-e^y)\delta - \frac{e^y}{2}\delta^2 + \cdots.$$ Taylor's Theorem with Remainder implies $\mathcal{R}(\log(x),\delta)$ is dominated by $e^{y+\delta}/2 \lt x$ for sufficiently small $\delta$. Since the expectation of $x$ is finite, the inequality holds for Gamma distributions. Similar calculations imply the inequality for Weibull distributions, Half-Normal distributions, Lognormal distributions, etc. In fact, to obtain counterexamples we would need to violate at least one assumption, forcing us to look at distributions where $p$ vanishes on some interval, or is not continuously twice differentiable, or has infinitely many modes. These are easy tests to apply to any family of distributions commonly used in statistical modeling.
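For the Exponential case ($k=0$) this is easy to check numerically: with $p(x)=e^{-x}$, the integrand reduces to $e^{-x}\left(x\epsilon - \log(1+\epsilon)\right)$, so $\mathcal{I}_p(\epsilon) = \epsilon - \log(1+\epsilon) \approx \epsilon^2/2$, consistent with the quadratic bound. A quick sketch:

```python
import numpy as np
from scipy.integrate import quad

def I_exponential(eps):
    """I_p(eps) for the Exponential(1) density p(x) = exp(-x):
    log(p(x) / ((1+eps) p(x(1+eps)))) simplifies to x*eps - log(1+eps)."""
    integrand = lambda x: np.exp(-x) * (x * eps - np.log1p(eps))
    value, _ = quad(integrand, 0, np.inf)
    return value

# The closed form eps - log(1+eps) behaves like eps^2 / 2 for small eps,
# so c = 1/2 (plus a little slack) works in the bound I_p(eps) <= c eps^2.
```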
Special probability distribution
Preliminaries Write $$\mathcal{I}_p(\epsilon) = \int_0^\infty p(x) \log\left(\frac{p(x)}{(1+\epsilon)p(x(1+\epsilon))}\right)\, dx.$$ The logarithms and the relationship between $p(x)$ and $p(x(1+\eps
Special probability distribution Preliminaries Write $$\mathcal{I}_p(\epsilon) = \int_0^\infty p(x) \log\left(\frac{p(x)}{(1+\epsilon)p(x(1+\epsilon))}\right)\, dx.$$ The logarithms and the relationship between $p(x)$ and $p(x(1+\epsilon))$ suggest expressing both $p$ and its argument as exponentials. To that end, define $$q(y) = \log(p(e^y))$$ for all real $y$ for which the right hand side is defined and equal to $-\infty$ wherever $p(e^y)=0$. Notice that the change of variables $x=e^y$ entails $dx=e^y dy$ and (taking $p$ to be the density of a distribution) that the Law of Total Probability can thereby be expressed as $$1 = \int_0^\infty p(x)dx = \int_\mathbb{R} e^{q(y)+y} dy.\tag{1}$$ Let us assume $e^{q(y)+y}\to 0$ when $y\to\pm\infty$. This rules out probability distributions $p$ with infinitely many spikes in density near $0$ or $\infty$. In particular, if the tails of $p$ are eventually monotonic, $(1)$ implies this assumption, showing it is not a severe one. To make working with the logarithms easier, also observe that $$1+\epsilon = e^\epsilon + O(\epsilon^2).$$ Because the following calculations will be performed up to multiples of $\epsilon^2$, define $$\delta = \log(1+\epsilon).$$ We might as well replace $1+\epsilon$ by $e^\delta$, with $\delta=0$ corresponding to $\epsilon=0$ and positive $\delta$ corresponding to positive $\epsilon$. Analysis One obvious way in which the inequality can fail would be for the integral $\mathcal{I}_p(\epsilon)$ to diverge for some $\epsilon \in (0, 1]$. This would happen if, for instance, there were to be any proper interval $[u, v]$ of positive numbers, no matter how small, in which $p$ were identically zero but $p$ were not zero on the interval $[u-\epsilon, v-\epsilon]$. That would cause the integrand to be infinite with positive probability. Because the question is unspecific concerning the nature of $p$, we could get bogged down in technical issues concerning how smooth $p$ might be. 
Let's avoid such issues, still hoping to gain some insight, by assuming that $q$ everywhere has as many derivatives as we might care to use. (Two will suffice if $q^{\prime\prime}$ is continuous.) Because that guarantees $q$ remains bounded on any bounded set, it implies that $p(x)$ is never zero when $x \gt 0$. Note that the question really concerns the behavior of $\mathcal{I}_p(\epsilon)$ as $\epsilon$ approaches zero from above. Since this integral is a continuous function of $\epsilon$ in the interval $(0,1]$, it attains some maximum $M_p(a)$ when $\epsilon$ is restricted to any positive interval $[a,1]$, enabling us to choose $c = M_p(a)/a^2$, because obviously $$c\epsilon^2 = M_p(a) \left(\frac{\epsilon}{a}\right)^2 \ge M_p(a) \ge \mathcal{I}_p(\epsilon)$$ makes the inequality work. This is why we need only be concerned with the calculation modulo $\epsilon^2$. Solution Using the changes of variable from $x$ to $y$, from $p$ to $q$, and $\epsilon$ to $\delta$, let's calculate $\mathcal{I}_p(\epsilon)$ through second order in $\epsilon$ (or $\delta$) in the hope of achieving a simplification. To that end define $$\mathcal{R}(y, \delta) \delta^2 = q(y+\delta) - q(y) - \delta q^\prime(y)$$ to be the order-$2$ remainder in the Taylor expansion of $q$ around $y$. $$\eqalign{ \mathcal{I}_p(\epsilon) &= \int_\mathbb{R}e^{q(y) + y} \left(q(y) - q(y+\delta) - \delta\right)\, dy \\ &=-\int_\mathbb{R}e^{q(y) + y} \left(\delta + \delta q^\prime(y) + \mathcal{R}(y, \delta) \delta^2 \right)\, dy \\ &= -\delta\int_\mathbb{R}e^{q(y) + y} \left(1+q^\prime(y)\right)\, dy -\delta^2\int_\mathbb{R}e^{q(y) + y} \mathcal{R}(y, \delta)\, dy. }$$ Changing variables to $q(y)+y$ in the left hand integral shows it must vanish, as remarked in the assumption following $(1)$. 
Changing variables back to $x=e^y$ in the right hand integral gives $$\mathcal{I}_p(\epsilon) = - \delta^2 \int_\mathbb{R} p(x) \mathcal{R}(\log(x), \delta)\, dy = -\delta^2 \mathbb{E}_p\left(\mathcal{R}(\log(x), \delta)\right).$$ The inequality holds (under our various technical assumptions) if and only if the coefficient of $\delta^2$ on the right hand side is finite. Interpretation This is a good point to stop, because it appears to uncover the essential issue: $\mathcal{I}_p(\epsilon)$ is bounded by a quadratic function of $\epsilon$ precisely when the quadratic error in the Taylor expansion of $q$ doesn't explode (relative to the distribution) as $y$ approaches $\pm\infty$. Let's check some of the cases mentioned in the question: the Exponential and Gamma distributions. (The Exponential is a special case of the Gamma.) We never have to worry about scale parameters, because they merely change the units of measurement. Only non-scale parameters matter. Here, because $p(x) = x^k e^{-x}$ for $k \gt -1$, $$q(y) = -e^y + k y - \log\Gamma(k+1).$$ The Taylor expansion around an arbitrary $y$ is $$\text{Constant} + (k-e^y)\delta - \frac{e^y}{2}\delta^2 + \cdots.$$ Taylor's Theorem with Remainder implies $\mathcal{R}(\log(x),\delta)$ is dominated by $e^{y+\delta}/2 \lt x$ for sufficiently small $\delta$. Since the expectation of $x$ is finite, the inequality holds for Gamma distributions. Similar calculations imply the inequality for Weibull distributions, Half-Normal distributions, Lognormal distributions, etc. In fact, to obtain counterexamples we would need to violate at least one assumption, forcing us to look at distributions where $p$ vanishes on some interval, or is not continuously twice differentiable, or has infinitely many modes. These are easy tests to apply to any family of distributions commonly used in statistical modeling.
Special probability distribution Preliminaries Write $$\mathcal{I}_p(\epsilon) = \int_0^\infty p(x) \log\left(\frac{p(x)}{(1+\epsilon)p(x(1+\epsilon))}\right)\, dx.$$ The logarithms and the relationship between $p(x)$ and $p(x(1+\eps
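As a concrete check of the analysis above, take the Exponential density $p(x) = e^{-x}$. The integrand reduces to $e^{-x}\left(\epsilon x - \log(1+\epsilon)\right)$, so $\mathcal{I}_p(\epsilon) = \epsilon - \log(1+\epsilon) \le \epsilon^2/2$, confirming the inequality with $c = 1/2$. A small numerical sketch (pure Python with simple trapezoidal quadrature; this verification is mine, not part of the original answer):

```python
import math

def integrand(x, eps):
    # p(x) * log( p(x) / ((1+eps) * p(x*(1+eps))) ) for p(x) = exp(-x)
    # simplifies to exp(-x) * (eps*x - log(1+eps)).
    return math.exp(-x) * (eps * x - math.log1p(eps))

def I_p(eps, upper=60.0, n=60000):
    # Composite trapezoidal rule on [0, upper]; the integrand decays
    # like exp(-x), so truncating at 60 is harmless.
    h = upper / n
    total = 0.5 * (integrand(0.0, eps) + integrand(upper, eps))
    for i in range(1, n):
        total += integrand(i * h, eps)
    return total * h

for eps in (0.5, 0.1, 0.01):
    numeric = I_p(eps)
    closed = eps - math.log1p(eps)   # exact value for the Exponential
    assert abs(numeric - closed) < 1e-5
    assert numeric <= eps ** 2 / 2   # the inequality with c = 1/2
```

The closed form makes the quadratic behavior explicit: $\epsilon - \log(1+\epsilon) = \epsilon^2/2 - \epsilon^3/3 + \cdots$, exactly the order-$\epsilon^2$ bound the answer derives in general.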
24,529
Is it correct to use plus or minus symbol before standard deviation?
Yes, you can represent the standard deviation as "±SD". For example, $\bar x\pm 2\times SD$ shows the lower and upper limits covering most of the individual outcomes $x_i$ of Normal data (strictly speaking with $\mu$ rather than $\bar x$), while $\bar x\pm 2\times SE$ of the mean shows the lower and upper limits for the population mean.
Is it correct to use plus or minus symbol before standard deviation?
Yes! you can represent standard deviation as "±SD". For Example:- $\bar x\pm 2\times SD$, it just shows the lower and upper limit for most of individual output $x_i$ of Normal data. ($\mu ~rather~than
Is it correct to use plus or minus symbol before standard deviation? Yes, you can represent the standard deviation as "±SD". For example, $\bar x\pm 2\times SD$ shows the lower and upper limits covering most of the individual outcomes $x_i$ of Normal data (strictly speaking with $\mu$ rather than $\bar x$), while $\bar x\pm 2\times SE$ of the mean shows the lower and upper limits for the population mean.
Is it correct to use plus or minus symbol before standard deviation? Yes! you can represent standard deviation as "±SD". For Example:- $\bar x\pm 2\times SD$, it just shows the lower and upper limit for most of individual output $x_i$ of Normal data. ($\mu ~rather~than
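To make "most of" precise: for Normal data the exact coverage of $\mu \pm 2\sigma$ is about 95.45%. This can be checked with the Normal CDF (a quick Python sketch added for illustration, not part of the original answer):

```python
import math

def normal_coverage(k):
    # Probability that a Normal observation falls within mu +/- k*sigma:
    # Phi(k) - Phi(-k) = erf(k / sqrt(2)).
    return math.erf(k / math.sqrt(2.0))

assert abs(normal_coverage(1) - 0.6827) < 1e-3   # the familiar 68% rule
assert abs(normal_coverage(2) - 0.9545) < 1e-3   # ~95% within 2 SDs
assert abs(normal_coverage(3) - 0.9973) < 1e-3
```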
24,530
Understanding the output of a bootstrap performed in R (tsboot, MannKendall)
Having run into the same question and explored it with a controlled data set – model y = ax + b with N(0, sig) errors, I find that the Kendall package may not be working as advertised. The x in my case was 1:100, and y = x, with sig = 100 (variance of error term). The regression looks good, and so does Kendall's tau. There is no autocorrelation here other than that induced by the linear model. Running the Kendall test as advertised with block lengths of 1, 3, 5 and 10 yields very large bias values, and boot.ci reports no trend. Subsequently, I hand-coded the bootstrap of the data with these block lengths, and with my control series, I get reasonable results as to the mean of the bootstrap samples and their spread. Hence, it is possible that something has gone awry with the package Kendall with regard to the block bootstrap.
Understanding the output of a bootstrap performed in R (tsboot, MannKendall)
Having run into the same question and explored it with a controlled data set – model y = ax + b with N(0, sig) errors, I find that the Kendall package may not be working as advertised. The x in my cas
Understanding the output of a bootstrap performed in R (tsboot, MannKendall) Having run into the same question and explored it with a controlled data set – model y = ax + b with N(0, sig) errors, I find that the Kendall package may not be working as advertised. The x in my case was 1:100, and y = x, with sig = 100 (variance of error term). The regression looks good, and so does Kendall's tau. There is no autocorrelation here other than that induced by the linear model. Running the Kendall test as advertised with block lengths of 1, 3, 5 and 10 yields very large bias values, and boot.ci reports no trend. Subsequently, I hand-coded the bootstrap of the data with these block lengths, and with my control series, I get reasonable results as to the mean of the bootstrap samples and their spread. Hence, it is possible that something has gone awry with the package Kendall with regard to the block bootstrap.
Understanding the output of a bootstrap performed in R (tsboot, MannKendall) Having run into the same question and explored it with a controlled data set – model y = ax + b with N(0, sig) errors, I find that the Kendall package may not be working as advertised. The x in my cas
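The hand-coded check described in the answer can be sketched along these lines. This is an illustrative pure-Python reimplementation (a naive O(n²) Kendall tau and a simple moving-block bootstrap), not the code from the Kendall package or tsboot, and the trend/noise parameters are made up:

```python
import random

def kendall_tau(x, y):
    # Naive O(n^2) Kendall tau: normalized concordant-minus-discordant count.
    n = len(x)
    s = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = (x[j] > x[i]) - (x[j] < x[i])
            dy = (y[j] > y[i]) - (y[j] < y[i])
            s += dx * dy
    return 2.0 * s / (n * (n - 1))

def block_bootstrap(series, block_len, rng):
    # Moving-block bootstrap: concatenate randomly chosen contiguous blocks.
    n = len(series)
    out = []
    while len(out) < n:
        start = rng.randrange(n - block_len + 1)
        out.extend(series[start:start + block_len])
    return out[:n]

rng = random.Random(42)
x = list(range(1, 101))
y = [xi + rng.gauss(0, 10) for xi in x]     # linear trend plus noise

observed = kendall_tau(x, y)                # strong positive trend
taus = [kendall_tau(x, block_bootstrap(y, 5, rng)) for _ in range(100)]
mean_boot = sum(taus) / len(taus)           # resampling scrambles the trend

assert observed > 0.5
assert abs(mean_boot) < 0.15
```

With a clear trend in the controlled series, the observed tau sits far outside the spread of the block-bootstrap replicates, which is the behavior the answer expected and did not get from the package.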
24,531
Understanding measure concentration inequalities
The use of exponential moments is a common step in the process of proving concentration of measure inequalities. My understanding is as follows: 1) By using $\mathbb{E}e^X$ rather than $\mathbb{E} X$, one captures all the moments of $X$, rather than just the first moment. Hence, it is always advantageous to bound $\mathbb{E} e^X$, rather than bound $\mathbb{E}X$, as there is more information in $\mathbb{E} e^X$. Why does $\mathbb{E} e^X$ have more information? An informal explanation is given by the Taylor expansion $e^{X}=1+X+\frac{X^2}{2}+\frac{X^3}{6}+\ldots$. As you can see, all the powers of $X$ are involved. Hence, when you bound $\mathbb{E}e^{X}$, you essentially end up bounding all moments of $X$.
Understanding measure concentration inequalities
The use of exponential moments is a common step in the process of proving concentration of measure inequalities. My understanding is as follows 1) By using $\mathbb{E}e^X$ rather than $\mathbb{E} X$,
Understanding measure concentration inequalities The use of exponential moments is a common step in the process of proving concentration of measure inequalities. My understanding is as follows: 1) By using $\mathbb{E}e^X$ rather than $\mathbb{E} X$, one captures all the moments of $X$, rather than just the first moment. Hence, it is always advantageous to bound $\mathbb{E} e^X$, rather than bound $\mathbb{E}X$, as there is more information in $\mathbb{E} e^X$. Why does $\mathbb{E} e^X$ have more information? An informal explanation is given by the Taylor expansion $e^{X}=1+X+\frac{X^2}{2}+\frac{X^3}{6}+\ldots$. As you can see, all the powers of $X$ are involved. Hence, when you bound $\mathbb{E}e^{X}$, you essentially end up bounding all moments of $X$.
Understanding measure concentration inequalities The use of exponential moments is a common step in the process of proving concentration of measure inequalities. My understanding is as follows 1) By using $\mathbb{E}e^X$ rather than $\mathbb{E} X$,
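A concrete illustration of the payoff: for a standard Normal $X$, $\mathbb{E}e^{\lambda X} = e^{\lambda^2/2}$, and optimizing the Chernoff bound $P(X \ge t) \le e^{-\lambda t}\,\mathbb{E}e^{\lambda X}$ over $\lambda$ gives $e^{-t^2/2}$, an exponentially decaying tail bound. A bound built on only the second moment (Chebyshev) decays just polynomially. A quick Python check of both facts:

```python
import math

def normal_tail(t):
    # Exact upper tail P(X >= t) for a standard Normal X.
    return 0.5 * math.erfc(t / math.sqrt(2.0))

def chernoff_bound(t):
    # min over lam > 0 of exp(-lam*t) * E[exp(lam*X)]
    #   = min exp(-lam*t + lam**2/2) = exp(-t**2/2), attained at lam = t.
    return math.exp(-t * t / 2.0)

for t in (1.0, 2.0, 3.0, 4.0):
    assert normal_tail(t) <= chernoff_bound(t)   # the bound is valid

# Chebyshev gives P(|X| >= t) <= 1/t**2; the exponential-moment bound
# is far tighter in the tail:
assert chernoff_bound(3.0) < 1.0 / 3.0 ** 2
```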
24,532
How to interpret a negative linear regression coefficient for a logged outcome variable?
You should not take the absolute value of the coefficient--although this would let you know the effect of a 1-unit decrease in X. Think of it this way: Using the original negative coefficient, this equation shows the percentage change in Y for a 1-unit increase in X: (exp[−0.0564*1]−1)⋅100=−5.48 Your "absolute value" equation actually shows the percentage change in Y for a 1-unit decrease in X: (exp[-0.0564*-1]−1)⋅100=5.80 You can use a percentage change calculator to see how both of these percentages map onto a 1-unit change in X. Imagine that a 1-unit change in X were associated with a 58-unit change in linear Y: Our linear version of Y going from 1,000 to 1,058 is a 5.8% increase. Our linear version of Y going from 1,058 to 1,000 is a 5.482% decrease.
How to interpret a negative linear regression coefficient for a logged outcome variable?
You should not take the absolute value of the coefficient--although this would let you know the effect of a 1-unit decrease in X. Think of it this way: Using the original negative coefficient, this eq
How to interpret a negative linear regression coefficient for a logged outcome variable? You should not take the absolute value of the coefficient--although this would let you know the effect of a 1-unit decrease in X. Think of it this way: Using the original negative coefficient, this equation shows the percentage change in Y for a 1-unit increase in X: (exp[−0.0564*1]−1)⋅100=−5.48 Your "absolute value" equation actually shows the percentage change in Y for a 1-unit decrease in X: (exp[-0.0564*-1]−1)⋅100=5.80 You can use a percentage change calculator to see how both of these percentages map onto a 1-unit change in X. Imagine that a 1-unit change in X were associated with a 58-unit change in linear Y: Our linear version of Y going from 1,000 to 1,058 is a 5.8% increase. Our linear version of Y going from 1,058 to 1,000 is a 5.482% decrease.
How to interpret a negative linear regression coefficient for a logged outcome variable? You should not take the absolute value of the coefficient--although this would let you know the effect of a 1-unit decrease in X. Think of it this way: Using the original negative coefficient, this eq
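The arithmetic above can be reproduced directly (a quick Python sketch using the coefficient −0.0564 from the answer):

```python
import math

b = -0.0564                                      # logged-outcome coefficient

pct_per_unit_increase = (math.exp(b) - 1) * 100  # X goes up by 1
pct_per_unit_decrease = (math.exp(-b) - 1) * 100 # X goes down by 1

assert round(pct_per_unit_increase, 2) == -5.48
assert round(pct_per_unit_decrease, 2) == 5.80

# The two percentages are reciprocal moves, not mirror images:
# going 1000 -> 1058 is +5.80%, going 1058 -> 1000 is about -5.48%.
assert abs((1 + pct_per_unit_increase / 100)
           * (1 + pct_per_unit_decrease / 100) - 1) < 1e-12
```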
24,533
How to test for overdispersion in Poisson GLMM with lmer() in R?
Among many other useful tidbits on GLMM with lmer() and other GLMM fitting software, check out the section on the following web page called How can I deal with overdispersion in GLMMs? http://glmm.wikidot.com/faq
How to test for overdispersion in Poisson GLMM with lmer() in R?
Among many other useful tidbits on GLMM with lmer() and other GLMM fitting software, check out the section on the following web page called How can I deal with overdispersion in GLMMs? http://glmm.wik
How to test for overdispersion in Poisson GLMM with lmer() in R? Among many other useful tidbits on GLMM with lmer() and other GLMM fitting software, check out the section on the following web page called How can I deal with overdispersion in GLMMs? http://glmm.wikidot.com/faq
How to test for overdispersion in Poisson GLMM with lmer() in R? Among many other useful tidbits on GLMM with lmer() and other GLMM fitting software, check out the section on the following web page called How can I deal with overdispersion in GLMMs? http://glmm.wik
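One quick check discussed in that FAQ compares the sum of squared Pearson residuals to the residual degrees of freedom; a ratio well above 1 suggests overdispersion. A minimal sketch in Python rather than R, with made-up counts and a constant fitted mean purely for illustration (in practice the fitted means come from the GLMM):

```python
# Hypothetical observed counts and Poisson fitted means.
y = [0, 1, 9, 2, 0, 8, 1, 12, 0, 3]
mu = [sum(y) / len(y)] * len(y)          # constant mean 3.6 for simplicity

pearson_chisq = sum((yi - mi) ** 2 / mi for yi, mi in zip(y, mu))
resid_df = len(y) - 1                    # n minus number of fitted parameters
dispersion = pearson_chisq / resid_df

# Under a well-specified Poisson model this ratio should be near 1;
# these toy counts are far more variable than Poisson allows.
assert dispersion > 2
```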
24,534
How to test for overdispersion in Poisson GLMM with lmer() in R?
The package AER (p.33) has the Cameron & Trivedi test of the assumption of equidispersion that can be used with GLMs. AER::dispersiontest(model1)
How to test for overdispersion in Poisson GLMM with lmer() in R?
The package AER (p.33) has the Cameron & Trivedi test of the assumption of equidispersion that can be used with GLMs. AER::dispersiontest(model1)
How to test for overdispersion in Poisson GLMM with lmer() in R? The package AER (p.33) has the Cameron & Trivedi test of the assumption of equidispersion that can be used with GLMs. AER::dispersiontest(model1)
How to test for overdispersion in Poisson GLMM with lmer() in R? The package AER (p.33) has the Cameron & Trivedi test of the assumption of equidispersion that can be used with GLMs. AER::dispersiontest(model1)
24,535
Which test to use to compare proportions between 3 groups?
In a table like this you can partition the G-statistic produced by a G-test, rather than calculating the ORs or by running a logistic regression. Although you have to decide how you're going to partition it. Here the G-statistic, which is similar to Pearson's X^2 and also follows a X^2 distribution, is: G = 2 * sum(OBS * ln(OBS/EXP)). You first calculate that for the overall table, in this case: G = 76.42, on 2 df, which is highly significant (p < 0.0001). That is to say that return rate depends on the group (A, B, or C). Then, because you have 2 df, you can perform two smaller 1 df (2x2) G-tests. After performing the first one, however, you have to collapse the rows of the two levels used in the first test, and then use those values to test them against the third level. Here, let's say you test B against C first. Obs Rec Ret Total B 17530 717 18247 C 42408 1618 44026 Exp Rec Ret Total B 17562.8 684.2 18247 C 42375.2 1650.8 44026 This produces a G-stat of 2.29 on 1 df, which is not significant (p = 0.1300). Then make a new table, combining rows B and C. Now test A against B+C. Obs Rec Ret Total A 16895 934 17829 B+C 59938 2335 62273 Exp Rec Ret Total A 17101.4 727.6 17829 B+C 59731.6 2541.4 62273 This produces a G-stat of 74.13, on 1 df, which is also highly significant (p < 0.0001). You can check your work by adding the two smaller test statistics, which should equal the larger test statistic. It does: 2.29 + 74.13 = 76.42 The story here is that your B and C groups are not significantly different, but that group A has a higher return rate than B and C combined. Hope that helps! You could also have partitioned the G-stat differently by comparing A to B first, then C to A+B, or by comparing A to C, then B to A+C. Additionally, you can expand this to 4 or more groups, but after each test you have to collapse the two rows that you just tested, with a maximum number of tests equal to the df in your original table. 
There are other ways to partition with more complicated tables. Agresti's book, "Categorical Data Analysis", should have the details. Specifically, his chapter on inference for two-way contingency tables.
Which test to use to compare proportions between 3 groups?
In a table like this you can partition the G-statistic produced by a G-test, rather than calculating the ORs or by running a logistic regression. Although you have to decide how you're going to parti
Which test to use to compare proportions between 3 groups? In a table like this you can partition the G-statistic produced by a G-test, rather than calculating the ORs or by running a logistic regression. Although you have to decide how you're going to partition it. Here the G-statistic, which is similar to Pearson's X^2 and also follows a X^2 distribution, is: G = 2 * sum(OBS * ln(OBS/EXP)). You first calculate that for the overall table, in this case: G = 76.42, on 2 df, which is highly significant (p < 0.0001). That is to say that return rate depends on the group (A, B, or C). Then, because you have 2 df, you can perform two smaller 1 df (2x2) G-tests. After performing the first one, however, you have to collapse the rows of the two levels used in the first test, and then use those values to test them against the third level. Here, let's say you test B against C first. Obs Rec Ret Total B 17530 717 18247 C 42408 1618 44026 Exp Rec Ret Total B 17562.8 684.2 18247 C 42375.2 1650.8 44026 This produces a G-stat of 2.29 on 1 df, which is not significant (p = 0.1300). Then make a new table, combining rows B and C. Now test A against B+C. Obs Rec Ret Total A 16895 934 17829 B+C 59938 2335 62273 Exp Rec Ret Total A 17101.4 727.6 17829 B+C 59731.6 2541.4 62273 This produces a G-stat of 74.13, on 1 df, which is also highly significant (p < 0.0001). You can check your work by adding the two smaller test statistics, which should equal the larger test statistic. It does: 2.29 + 74.13 = 76.42 The story here is that your B and C groups are not significantly different, but that group A has a higher return rate than B and C combined. Hope that helps! You could also have partitioned the G-stat differently by comparing A to B first, then C to A+B, or by comparing A to C, then B to A+C. 
Additionally, you can expand this to 4 or more groups, but after each test you have to collapse the two rows that you just tested, with a maximum number of tests equal to the df in your original table. There are other ways to partition with more complicated tables. Agresti's book, "Categorical Data Analysis", should have the details. Specifically, his chapter on inference for two-way contingency tables.
Which test to use to compare proportions between 3 groups? In a table like this you can partition the G-statistic produced by a G-test, rather than calculating the ORs or by running a logistic regression. Although you have to decide how you're going to parti
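The partition above is easy to reproduce numerically. A short illustrative sketch in Python (not R), using the observed counts from the answer:

```python
import math

def g_stat(table):
    # Likelihood-ratio (G) statistic for an r x 2 contingency table:
    # G = 2 * sum(OBS * ln(OBS / EXP)) with the usual independence EXP.
    row_tot = [sum(row) for row in table]
    col_tot = [sum(col) for col in zip(*table)]
    grand = sum(row_tot)
    g = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            exp = row_tot[i] * col_tot[j] / grand
            g += obs * math.log(obs / exp)
    return 2.0 * g

full = [[16895, 934], [17530, 717], [42408, 1618]]          # A, B, C
b_vs_c = [[17530, 717], [42408, 1618]]
a_vs_bc = [[16895, 934], [17530 + 42408, 717 + 1618]]

g_full, g_bc, g_a = g_stat(full), g_stat(b_vs_c), g_stat(a_vs_bc)

assert abs(g_full - 76.42) < 0.2
assert abs(g_bc - 2.29) < 0.1
assert abs(g_a - 74.13) < 0.2
# The partition is additive: the two 1-df components sum
# (to numerical precision) to the 2-df total.
assert abs(g_full - (g_bc + g_a)) < 0.01
```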
24,536
Which test to use to compare proportions between 3 groups?
I would simply calculate odds (or risk) ratios between group A and B, between B and C, and between A and C and see if they are statistically different. I don't see a reason to do an "omnibus" proportions test in this case since you only have three groups. Three chi-square tests could do the trick as well. As some individuals have outlined in the comments below, a logistic regression with planned contrasts would work well too.
Which test to use to compare proportions between 3 groups?
I would simply calculate odds (or risk) ratios between group A and B, between B and C, and between A and C and see if they statistically different. I don't see a reason to do a "omnibus" proportions t
Which test to use to compare proportions between 3 groups? I would simply calculate odds (or risk) ratios between group A and B, between B and C, and between A and C and see if they are statistically different. I don't see a reason to do an "omnibus" proportions test in this case since you only have three groups. Three chi-square tests could do the trick as well. As some individuals have outlined in the comments below, a logistic regression with planned contrasts would work well too.
Which test to use to compare proportions between 3 groups? I would simply calculate odds (or risk) ratios between group A and B, between B and C, and between A and C and see if they statistically different. I don't see a reason to do a "omnibus" proportions t
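A pairwise odds ratio with a Wald confidence interval takes only a few lines. An illustrative Python sketch using the A and B counts from the other answer (a CI excluding 1 indicates a significant difference):

```python
import math

def odds_ratio_ci(ret1, rec1, ret2, rec2, z=1.96):
    # Odds ratio of group 1 vs group 2 with a 95% Wald CI on the log scale.
    log_or = math.log((ret1 / rec1) / (ret2 / rec2))
    se = math.sqrt(1 / ret1 + 1 / rec1 + 1 / ret2 + 1 / rec2)
    return (math.exp(log_or),
            math.exp(log_or - z * se),
            math.exp(log_or + z * se))

# Group A: 934 returned / 16895 received; group B: 717 / 17530.
or_ab, lo, hi = odds_ratio_ci(934, 16895, 717, 17530)

assert abs(or_ab - 1.352) < 0.005
assert lo > 1.0     # CI excludes 1: A's return odds exceed B's
```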
24,537
Why are eigen and svd decompositions of a covariance matrix based on sparse data yielding different results?
You need to do the sum of the absolute values of the eigenvalues, i.e., sum(abs(Eg$values)), and compare it with the sum of the singular values. They would be equal. The reason is that if you multiply the rows or columns that correspond to the negative eigenvalues by $-1$, then the eigenvalues of the new matrix become positive and the orthogonality of the eigenvectors is not disturbed. The proof of the converse of this beautiful theorem appeared in The algebra of hyperboloids of revolution, Javier F. Cabrera, Linear Algebra and its Applications, Princeton University (now at Rutgers). Another way to reason this is via the fact that sqrt(eigen(t(Cg) %*% Cg)$values) are equal to the singular values of Cg. But when the eigenvalues are negative, the data must be represented in a hermitian form with the complex plane taken into account, which is what was missed in the original formulation, i.e. the data formed by the symmetric square-root of the matrix with negative eigenvalues would have complex entries.
Why are eigen and svd decompositions of a covariance matrix based on sparse data yielding different
You need to do the sum of the absolute value of eigen values i.e, sum(abs(Eg$values)) and compare it with the sum of the singular values. They would be equal. The reason is that if you multiply the ro
Why are eigen and svd decompositions of a covariance matrix based on sparse data yielding different results? You need to do the sum of the absolute values of the eigenvalues, i.e., sum(abs(Eg$values)), and compare it with the sum of the singular values. They would be equal. The reason is that if you multiply the rows or columns that correspond to the negative eigenvalues by $-1$, then the eigenvalues of the new matrix become positive and the orthogonality of the eigenvectors is not disturbed. The proof of the converse of this beautiful theorem appeared in The algebra of hyperboloids of revolution, Javier F. Cabrera, Linear Algebra and its Applications, Princeton University (now at Rutgers). Another way to reason this is via the fact that sqrt(eigen(t(Cg) %*% Cg)$values) are equal to the singular values of Cg. But when the eigenvalues are negative, the data must be represented in a hermitian form with the complex plane taken into account, which is what was missed in the original formulation, i.e. the data formed by the symmetric square-root of the matrix with negative eigenvalues would have complex entries.
Why are eigen and svd decompositions of a covariance matrix based on sparse data yielding different You need to do the sum of the absolute value of eigen values i.e, sum(abs(Eg$values)) and compare it with the sum of the singular values. They would be equal. The reason is that if you multiply the ro
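The relationship is easy to verify on a small symmetric matrix with a negative eigenvalue: the singular values equal the absolute values of the eigenvalues, so the sums agree only after taking absolute values. A hand-rolled 2×2 Python check (no linear-algebra library needed; the example matrix is my own, not from the question):

```python
import math

def sym_eigvals(a, b, c):
    # Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]]
    # via the quadratic formula for its characteristic polynomial.
    mid = (a + c) / 2.0
    rad = math.sqrt(((a - c) / 2.0) ** 2 + b ** 2)
    return mid + rad, mid - rad

# M = [[1, 2], [2, -1]] has eigenvalues +sqrt(5) and -sqrt(5).
eig = sym_eigvals(1.0, 2.0, -1.0)

# Singular values are the square roots of the eigenvalues of
# t(M) %*% M = [[5, 0], [0, 5]].
sv = tuple(math.sqrt(v) for v in sym_eigvals(5.0, 0.0, 5.0))

assert abs(sum(eig)) < 1e-12                    # eigenvalues sum to 0 ...
assert abs(sum(sv) - sum(abs(v) for v in eig)) < 1e-12   # ... but |sums| match
```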
24,538
Multivariate normal distribution of regression coefficient?
It is not $\beta$ that has a distribution but $\hat\beta$, as indicated by Taylor. The distribution of $\hat\beta$ stems from the fact that you get a different $\hat\beta$ for different samples. You can estimate this distribution from the single $\hat\beta$ obtained from your single sample, on condition that you have some information about the distribution of the underlying data.
Multivariate normal distribution of regression coefficient?
Not $\beta$ has a distribution but $\hat\beta$, as indicated by Taylor. The distribution of $\hat\beta$ stems from the fact that you get different $\hat\beta$ for different samples.---You can estimate
Multivariate normal distribution of regression coefficient? It is not $\beta$ that has a distribution but $\hat\beta$, as indicated by Taylor. The distribution of $\hat\beta$ stems from the fact that you get a different $\hat\beta$ for different samples. You can estimate this distribution from the single $\hat\beta$ obtained from your single sample, on condition that you have some information about the distribution of the underlying data.
Multivariate normal distribution of regression coefficient? Not $\beta$ has a distribution but $\hat\beta$, as indicated by Taylor. The distribution of $\hat\beta$ stems from the fact that you get different $\hat\beta$ for different samples.---You can estimate
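The sampling-distribution idea can be illustrated with a small simulation: many samples drawn from the same linear model give many different $\hat\beta$ values, and their spread is the sampling distribution of the estimator. An illustrative Python sketch with made-up parameters (true slope 2):

```python
import random

def ols_slope(x, y):
    # Ordinary least-squares slope: sum((x-xbar)(y-ybar)) / sum((x-xbar)^2).
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    sxy = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
    sxx = sum((xi - xbar) ** 2 for xi in x)
    return sxy / sxx

rng = random.Random(1)
x = [i / 50 for i in range(50)]
slopes = []
for _ in range(2000):
    y = [1.0 + 2.0 * xi + rng.gauss(0, 0.5) for xi in x]   # true slope = 2
    slopes.append(ols_slope(x, y))

mean_slope = sum(slopes) / len(slopes)
sd_slope = (sum((s - mean_slope) ** 2 for s in slopes) / len(slopes)) ** 0.5

assert abs(mean_slope - 2.0) < 0.05   # centered on the true beta
assert 0.15 < sd_slope < 0.35         # nonzero spread: a distribution
```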
24,539
Determining an optimal discretization of data from a continuous distribution
I'm going to share the solution I came up with to this problem a while back - this is not a formal statistical test but may provide a useful heuristic. Consider the general case where you have continuous observations $Y_{1}, Y_{2}, ..., Y_{n}$; without loss of generality suppose the sample space of each observation is the interval $[0,1]$. A categorization scheme will depend on a number of categories, $m$, and the locations of the thresholds which divide the categories, $0 < \lambda_{1} < \lambda_{2} < \cdots < \lambda_{m-1} < 1$. Denote the categorized version of $Y_{i}$ by $Z_{i}(m, {\boldsymbol \lambda})$, where ${\boldsymbol \lambda} = \{ \lambda_{1}, \lambda_{2}, \cdots, \lambda_{m-1} \}$. Thinking of the discretization of the data as a partitioning of the original data into classes, the variance of $Y_{i}$ can be thought of as a combination of variation within and between groups, for a fixed value of $m, {\boldsymbol \lambda}$: \begin{equation} {\rm var}(Y_{i}) = {\rm var} \Big( E(Y_{i} | Z_{i}(m, {\boldsymbol \lambda})) \Big) + E \Big( {\rm var}(Y_{i} | Z_{i}(m, {\boldsymbol \lambda})) \Big). \end{equation} A given categorization is successful at producing homogeneous groups if there is relatively little within group variance, quantified by $E \big( {\rm var}(Y_{i} | Z_{i}(m, {\boldsymbol \lambda})) \big)$. Therefore, we seek a parsimonious grouping that confers most of the variation in $Y_{i}$ to the ${\rm var} \big( E(Y_{i} | Z_{i}(m, {\boldsymbol \lambda})) \big)$ term. In particular, we want to choose $m$ so that adding additional levels does not significantly improve the within group homogeneity. 
With this in mind, we define the optimal ${\boldsymbol \lambda}$ for a fixed value of $m$ to be \begin{equation} {\boldsymbol \lambda}^{\star}_{m} = {\rm argmin}_{\boldsymbol \lambda} E \Big( {\rm var}(Y_{i} | Z_{i}(m, {\boldsymbol \lambda})) \Big) \end{equation} A rough diagnostic for determining what choice of $m$ is adequate is to look at the dropoff in $E \Big( {\rm var}(Y_{i} | Z_{i}(m, {\boldsymbol \lambda}^{\star}_{m} )) \Big)$ as a function of $m$ - this trajectory is monotonically non-increasing, and after it decreases sharply you can see that you're gaining relatively less precision by including more categories. This heuristic is similar in spirit to how a "Scree Plot" is sometimes used to see how many principal components explain "enough" of the variation.
Determining an optimal discretization of data from a continuous distribution
I'm going to share the solution I came up with to this problem a while back - this is not a formal statistical test but may provide a useful heuristic. Consider the general case where you have contin
Determining an optimal discretization of data from a continuous distribution I'm going to share the solution I came up with to this problem a while back - this is not a formal statistical test but may provide a useful heuristic. Consider the general case where you have continuous observations $Y_{1}, Y_{2}, ..., Y_{n}$; without loss of generality suppose the sample space of each observation is the interval $[0,1]$. A categorization scheme will depend on a number of categories, $m$, and the locations of the thresholds which divide the categories, $0 < \lambda_{1} < \lambda_{2} < \cdots < \lambda_{m-1} < 1$. Denote the categorized version of $Y_{i}$ by $Z_{i}(m, {\boldsymbol \lambda})$, where ${\boldsymbol \lambda} = \{ \lambda_{1}, \lambda_{2}, \cdots, \lambda_{m-1} \}$. Thinking of the discretization of the data as a partitioning of the original data into classes, the variance of $Y_{i}$ can be thought of as a combination of variation within and between groups, for a fixed value of $m, {\boldsymbol \lambda}$: \begin{equation} {\rm var}(Y_{i}) = {\rm var} \Big( E(Y_{i} | Z_{i}(m, {\boldsymbol \lambda})) \Big) + E \Big( {\rm var}(Y_{i} | Z_{i}(m, {\boldsymbol \lambda})) \Big). \end{equation} A given categorization is successful at producing homogeneous groups if there is relatively little within group variance, quantified by $E \big( {\rm var}(Y_{i} | Z_{i}(m, {\boldsymbol \lambda})) \big)$. Therefore, we seek a parsimonious grouping that confers most of the variation in $Y_{i}$ to the ${\rm var} \big( E(Y_{i} | Z_{i}(m, {\boldsymbol \lambda})) \big)$ term. In particular, we want to choose $m$ so that adding additional levels does not significantly improve the within group homogeneity. 
With this in mind, we define the optimal ${\boldsymbol \lambda}$ for a fixed value of $m$ to be \begin{equation} {\boldsymbol \lambda}^{\star}_{m} = {\rm argmin}_{\boldsymbol \lambda} E \Big( {\rm var}(Y_{i} | Z_{i}(m, {\boldsymbol \lambda})) \Big) \end{equation} A rough diagnostic for determining what choice of $m$ is adequate is to look at the dropoff in $E \Big( {\rm var}(Y_{i} | Z_{i}(m, {\boldsymbol \lambda}^{\star}_{m} )) \Big)$ as a function of $m$ - this trajectory is monotonically non-increasing, and after it decreases sharply you can see that you're gaining relatively less precision by including more categories. This heuristic is similar in spirit to how a "Scree Plot" is sometimes used to see how many principal components explain "enough" of the variation.
Determining an optimal discretization of data from a continuous distribution I'm going to share the solution I came up with to this problem a while back - this is not a formal statistical test but may provide a useful heuristic. Consider the general case where you have contin
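The diagnostic can be sketched numerically: for a fixed data set, compute the within-group variance $E\big({\rm var}(Y\,|\,Z)\big)$ for increasing $m$ and look for the point of diminishing returns. An illustrative Python sketch that uses simple equal-width thresholds rather than the optimal $\boldsymbol\lambda^{\star}_{m}$, with a toy data set:

```python
def within_group_variance(y, m):
    # Partition [0, 1] into m equal-width bins and return the weighted
    # average of the within-bin variances, i.e. E[var(Y | Z)].
    bins = [[] for _ in range(m)]
    for v in y:
        bins[min(int(v * m), m - 1)].append(v)
    n = len(y)
    total = 0.0
    for group in bins:
        if not group:
            continue
        mu = sum(group) / len(group)
        # (n_g / n) * var_g  ==  sum of squared deviations / n
        total += sum((v - mu) ** 2 for v in group) / n
    return total

y = [i / 99 for i in range(100)]        # toy sample on [0, 1]
w = [within_group_variance(y, m) for m in range(1, 7)]

# Finer partitions reduce the within-group variance ...
assert all(w[i + 1] <= w[i] + 1e-12 for i in range(len(w) - 1))
# ... and the reduction tails off: most of it is gained at small m.
assert w[0] - w[1] > w[4] - w[5]
```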
24,540
Determining an optimal discretization of data from a continuous distribution
Determining an "optimal" discretization scheme probably depends on the context of the problem. Generally, this question and Calculating optimal number of bins in a histogram sound like two sides of the same coin. Here is a summary (to date) of the answers there: MDL histogram density estimation discretizes data by fitting a normalized maximum likelihood distribution. This method is appealing to me as it is backed by sound principles in information theory. Freedman-Diaconis, prescribes a formula for determining a bin width that minimizes the integral of squared error between the resulting histogram and "underlying/true" probability distribution. Shimazaki-Shinomoto is similar in flavour to Freedman-Diaconis. Variations on the histogram is a bit different from the two above, and is related to creating histograms with "equal-area" per bin. They cut the empirical CDF diagonally. Bayesian blocks has a similar flavour to the above. The bin widths are variable. See more methods summarized on Wiki/Histogram
Determining an optimal discretization of data from a continuous distribution
Determining an "optimal" discretization scheme probably depends on the context of the problem. Generally, this question and Calculating optimal number of bins in a histogram sound like two sides of th
Determining an optimal discretization of data from a continuous distribution Determining an "optimal" discretization scheme probably depends on the context of the problem. Generally, this question and Calculating optimal number of bins in a histogram sound like two sides of the same coin. Here is a summary (to date) of the answers there: MDL histogram density estimation discretizes data by fitting a normalized maximum likelihood distribution. This method is appealing to me as it is backed by sound principles in information theory. Freedman-Diaconis, prescribes a formula for determining a bin width that minimizes the integral of squared error between the resulting histogram and "underlying/true" probability distribution. Shimazaki-Shinomoto is similar in flavour to Freedman-Diaconis. Variations on the histogram is a bit different from the two above, and is related to creating histograms with "equal-area" per bin. They cut the empirical CDF diagonally. Bayesian blocks has a similar flavour to the above. The bin widths are variable. See more methods summarized on Wiki/Histogram
Determining an optimal discretization of data from a continuous distribution Determining an "optimal" discretization scheme probably depends on the context of the problem. Generally, this question and Calculating optimal number of bins in a histogram sound like two sides of th
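As one concrete example from the list, the Freedman-Diaconis rule sets the bin width from the interquartile range: $h = 2\,\mathrm{IQR}\cdot n^{-1/3}$. A quick Python sketch (with a deliberately crude quartile convention, which is fine for illustration):

```python
import math

def freedman_diaconis_bins(data):
    # Bin width h = 2 * IQR * n^(-1/3); bin count = ceil(range / h).
    xs = sorted(data)
    n = len(xs)
    q1, q3 = xs[n // 4], xs[(3 * n) // 4]   # crude quartiles
    h = 2.0 * (q3 - q1) * n ** (-1.0 / 3.0)
    return h, math.ceil((xs[-1] - xs[0]) / h)

data = [i / 999 for i in range(1000)]       # toy sample on [0, 1]
h, k = freedman_diaconis_bins(data)

assert abs(h - 0.1) < 0.005                 # IQR ~ 0.5 and n^(1/3) = 10
assert k == 10
```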
24,541
Finding precision of Monte Carlo simulation estimate
There is one general and "in-universe" criterion for goodness of Monte Carlo -- convergence. Stick to one M and check how the PG behaves with the number of juries -- it should converge, and this will show you the number of repetitions for which you will have a reasonable (for your application) number of significant digits. Repeat this benchmark for a few other Ms to be sure you weren't lucky with the M selection, then proceed to the whole simulation.
Finding precision of Monte Carlo simulation estimate
There is one general and "in-universe" criterion for goodness of Monte Carlo -- convergence. Stick to one M and check how the PG behaves with the number of juries -- it should converge, so will show
Finding precision of Monte Carlo simulation estimate There is one general and "in-universe" criterion for goodness of Monte Carlo -- convergence. Stick to one M and check how the PG behaves with the number of juries -- it should converge, and this will show you the number of repetitions for which you will have a reasonable (for your application) number of significant digits. Repeat this benchmark for a few other Ms to be sure you weren't lucky with the M selection, then proceed to the whole simulation.
Finding precision of Monte Carlo simulation estimate There is one general and "in-universe" criterion for goodness of Monte Carlo -- convergence. Stick to one M and check how the PG behaves with the number of juries -- it should converge, so will show
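A minimal version of this convergence benchmark in Python, with a toy probability that has a known exact answer standing in for the jury model (the structure -- watching the estimate stabilize as the number of replications grows -- is the point):

```python
import random

def trial(rng):
    # Toy stand-in for one simulated jury: at least 8 heads in 12 tosses.
    return sum(rng.random() < 0.5 for _ in range(12)) >= 8

def estimate(n_reps, rng):
    # Plain Monte Carlo estimate of P(event) from n_reps replications.
    return sum(trial(rng) for _ in range(n_reps)) / n_reps

rng = random.Random(7)
exact = 794 / 4096          # sum of C(12, k) / 2^12 for k = 8..12

# Estimates for a growing number of replications should settle down
# around the true value, with shrinking Monte Carlo error.
sizes = [500, 2000, 8000, 32000]
ests = [estimate(m, rng) for m in sizes]

assert abs(ests[-1] - exact) < 0.02   # well within Monte Carlo error
```

Plotting `ests` (or the running estimate within a single run) against the replication count is the convergence check described in the answer: once the curve flattens to the precision you need, additional replications buy little.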
24,542
Finding precision of Monte Carlo simulation estimate
It seems to me that the problem here is whether the model is too complex to work out without using Monte Carlo simulation. If the model is relatively simple then it should be possible to examine it through conventional statistics and derive a solution to the question being asked, without re-running the model multiple times. This is a bit of an oversimplification, but if all your model did was produce points based on a normal distribution, then you could easily derive the sort of answers you are looking for. Of course, if the model is this simple then you are unlikely to need a Monte Carlo simulation to find your answers. If the problem is complex and it is not possible to break it down into more elementary parts, then Monte Carlo is the right type of model to use, but I don't think there is any way of defining confidence limits without running the model. Ultimately, to get the type of confidence limits described, the model would have to be run a number of times, a probability distribution would have to be fit to the outputs, and from there the confidence limits could be defined. One of the challenges with Monte Carlo simulation is that models give good and regular answers for distributions in the mid range, but the tails often give much more variable results, which ultimately means more runs are needed to define the shape of the outputs at the 2.5% and 97.5% percentiles.
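The "run the model a number of times, then take percentiles of the outputs" step can be sketched as below. `model_run` is a hypothetical stand-in for one run of an arbitrary stochastic model (here, a sum of ten standard normals), and the empirical 2.5%/97.5% percentiles of the outputs serve as approximate confidence limits.

```python
import random

random.seed(1)

def model_run():
    """Hypothetical stand-in for one run of a complex stochastic model."""
    return sum(random.gauss(0, 1) for _ in range(10))

# Run the model many times and take empirical 2.5% / 97.5% percentiles
# of the outputs as approximate confidence limits.
outputs = sorted(model_run() for _ in range(10_000))
lo = outputs[int(0.025 * len(outputs))]
hi = outputs[int(0.975 * len(outputs))]
print(lo, hi)
```

As the answer warns, these tail estimates are noisier than mid-range summaries, so the number of runs has to grow if the tails must be pinned down precisely.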
24,543
Deciding between Decoder-only or Encoder-only Transformers (BERT, GPT)
BERT just needs the encoder part of the Transformer; this is true, but the concept of masking is different from the Transformer's. You mask just a single word (token). So it will provide you a way to spell-check your text, for instance, by predicting if the word is more relevant than the word in the next sentence. My next <mask> will be different. The GPT-2 is very similar to the decoder-only transformer, you are right again, but again not quite. I would argue these are text-related models, but since you mentioned images, I recall someone told me BERT is conceptually a VAE. So you may use BERT-like models and they will have the hidden $h$ state you may use to say something about the weather. I would use GPT-2 or similar models to predict new images based on some start pixels. However, for what you need, you need both the encoder and the decoder of the transformer, because you would like to encode the background to a latent state and then decode it to the text "rain". Such nets exist and they can annotate images. But you don't need a transformer; a simple text-and-image VAE can work.
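The mechanical difference between the two masking schemes can be illustrated without any model at all (a sketch with numpy only, not tied to BERT's or GPT-2's real tokenizers or architectures): BERT hides an individual token and predicts it from both sides, while a decoder-only model hides nothing but uses a causal attention mask so position i only sees positions up to i.

```python
import numpy as np

tokens = ["my", "next", "word", "will", "be", "different"]
n = len(tokens)

# BERT-style: replace one randomly chosen token with [MASK];
# the model may attend to the full (bidirectional) context.
rng = np.random.default_rng(0)
masked = list(tokens)
masked[rng.integers(n)] = "[MASK]"
print(masked)

# GPT-style: no tokens are hidden, but a causal (lower-triangular)
# attention mask stops position i from attending to positions > i.
causal_mask = np.tril(np.ones((n, n), dtype=bool))
print(causal_mask.astype(int))
```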
24,544
Why L1 regularization can "zero out the weights" and therefore leads to sparse models? [duplicate]
Consider their metaphor whereby L1 regularization is "a force that subtracts some constant from the weight every time". First of all, it's only analogous to a physical force if the friction term is very high since forces induce acceleration, and acceleration is integrated to form velocity, which is integrated to form position. If friction is high, then the velocity never persists and the integral of the force over time is roughly the total change in position. So, consider that every time step, the position $x$ (weight or whatever you're regularizing) experiences a force that applies a total acceleration over that time step of $-k\, \textrm{sgn}(x)$. Suppose $x$ is smaller than $k$. It seems like applying the force would make $x$ overshoot zero. However, if you subdivide the time step into smaller time steps and $k$ into smaller total accelerations (since the force is integrated over smaller periods), in the limit of subdivision $x$ simply goes to zero. If this were the L2 norm, you could ask the same question about overshooting. Only now there is a simple physical metaphor: an overdamped pendulum (in mud), which does not overshoot.
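The limiting argument above can be checked numerically (a minimal sketch; k = 1.0 and x = 0.3 are arbitrary illustrative values): applying the whole step $-k\,\mathrm{sgn}(x)$ at once overshoots zero, while splitting it into n sub-steps of size k/n leaves x within one sub-step of zero.

```python
x0, k = 0.3, 1.0

# One big step: overshoots zero and flips sign.
one_step = x0 - k * (1 if x0 > 0 else -1)
print(one_step)

def subdivide(x, k, n):
    """Apply the same total shrinkage k in n sub-steps of size k/n."""
    for _ in range(n):
        if x > 0:
            x -= k / n
        elif x < 0:
            x += k / n
    return x

# As n grows, x ends up oscillating within k/n of zero: in the
# limit of subdivision it simply goes to zero, with no overshoot.
for n in (10, 100, 1000):
    print(n, subdivide(x0, k, n))
```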
24,545
Should degrees of freedom corrections be used for inference on GLM parameters?
Short answer: Not a full answer yet, but you might be interested in the following distributions related to the linked question. It compares the z-test (as also used by glm) and the t-test:

    layout(matrix(1:2,1,byrow=TRUE))

    # trying all 100 possible outcomes if the true value is p=0.7
    px <- dbinom(0:100,100,0.7)
    p_model = rep(0,101)
    p_model2 = rep(0,101)
    for (i in 0:100) {
      xi = c(rep(1,i),rep(0,100-i))
      model = glm(xi ~ 1, offset=rep(qlogis(0.7),100), family="binomial")
      p_model[i+1] = 1-summary(model)$coefficients[4]
      model2 <- glm(xi ~ 1, family = "binomial")
      coef <- summary(model2)$coefficients
      p_model2[i+1] = 1-2*pt(-abs((qlogis(0.7)-coef[1])/coef[2]),99,ncp=0)
    }

    # plotting cumulative distribution of outcomes z-test
    outcomes <- p_model[order(p_model)]
    cdf <- cumsum(px[order(p_model)])
    plot(1-outcomes,1-cdf,
         ylab="cumulative probability", xlab="calculated glm p-value",
         xlim=c(10^-4,1),ylim=c(10^-4,1),col=2,cex=0.5,log="xy")
    lines(c(0.00001,1),c(0.00001,1))
    for (i in 1:100) {
      lines(1-c(outcomes[i],outcomes[i+1]),1-c(cdf[i+1],cdf[i+1]),col=2)
      # lines(1-c(outcomes[i],outcomes[i]),1-c(cdf[i],cdf[i+1]),col=2)
    }
    title("probability for rejection with z-test \n as function of set alpha level")

    # plotting cumulative distribution of outcomes t-test
    outcomes <- p_model2[order(p_model2)]
    cdf <- cumsum(px[order(p_model2)])
    plot(1-outcomes,1-cdf,
         ylab="cumulative probability", xlab="calculated glm p-value",
         xlim=c(10^-4,1),ylim=c(10^-4,1),col=2,cex=0.5,log="xy")
    lines(c(0.00001,1),c(0.00001,1))
    for (i in 1:100) {
      lines(1-c(outcomes[i],outcomes[i+1]),1-c(cdf[i+1],cdf[i+1]),col=2)
      # lines(1-c(outcomes[i],outcomes[i]),1-c(cdf[i],cdf[i+1]),col=2)
    }
    title("probability for rejection with t-test \n as function of set alpha level")

(figure: probability of rejection with z-test vs t-test, as a function of the set alpha level)

And there is only a small difference. And also the z-test is actually better (but this might be because both the t-test and z-test are "wrong" and possibly the error of the z-test compensates this error).

Long Answer: ...
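The closing observation, that the z- and t-based p-values differ only slightly at n = 100, can also be checked directly. This is a hedged sketch using scipy rather than R; the Wald statistic value 1.8 is an arbitrary example, not taken from the output above.

```python
from scipy import stats

z = 1.8    # an example Wald statistic
df = 99    # residual degrees of freedom for n = 100

p_z = 2 * (1 - stats.norm.cdf(z))    # z-test p-value (what glm reports)
p_t = 2 * (1 - stats.t.cdf(z, df))   # t-test p-value with 99 df
print(p_z, p_t)
```

The t-based p-value is always slightly larger (heavier tails), but at 99 degrees of freedom the gap is tiny, matching the small difference seen in the plotted comparison.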
24,546
Nested cross validation vs repeated k-fold
Nested cross-validation and repeated k-fold cross-validation have different aims. The aim of nested cross-validation is to eliminate the bias in the performance estimate due to the use of cross-validation to tune the hyper-parameters. As the "inner" cross-validation has been directly optimised to tune the hyper-parameters it will give an optimistically biased estimate of generalisation performance. The aim of repeated k-fold cross-validation, on the other hand, is to reduce the variance of the performance estimate (to average out the random variation caused by partitioning the data into folds). If you want to reduce bias and variance, there is no reason (other than computational expense) not to combine both, such that repeated k-fold is used for the "outer" cross-validation of a nested cross-validation estimate. Using repeated k-fold cross-validation for the "inner" folds, might also improve the hyper-parameter tuning. If all of the models have only a small number of hyper-parameters (and they are not overly sensitive to the hyper-parameter values) then you can often get away with a non-nested cross-validation to choose the model, and only need nested cross-validation if you need an unbiased performance estimate, see: Jacques Wainer and Gavin Cawley, "Nested cross-validation when selecting classifiers is overzealous for most practical applications", Expert Systems with Applications, Volume 182, 2021 (doi, pdf) If, on the other hand, some models have more hyper-parameters than others, the model choice will be biased towards the models with the most hyper-parameters (which is probably a bad thing as they are the ones most likely to experience over-fitting in model selection). 
See the comparison of RBF kernels with a single hyper-parameter and Automatic Relevance Determination (ARD) kernels, with one hyper-parameter for each attribute, in section 4.3 of my paper (with Mrs Marsupial): GC Cawley and NLC Talbot, "On over-fitting in model selection and subsequent selection bias in performance evaluation", The Journal of Machine Learning Research 11, 2079-2107, 2010 (pdf) The PRESS statistic (which is the inner cross-validation) will almost always select the ARD kernel, despite the RBF kernel giving better generalisation performance in the majority of cases (ten of the thirteen benchmark datasets).
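The nested structure (inner loop tunes, outer loop only evaluates) can be sketched from scratch; here ridge regression on synthetic data stands in for the real candidate models, and the repetition of the outer folds is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic regression data.
n, p = 100, 5
X = rng.normal(size=(n, p))
w_true = rng.normal(size=p)
y = X @ w_true + rng.normal(scale=0.5, size=n)

def ridge_fit(X, y, lam):
    """Closed-form ridge regression weights."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(X, y, w):
    return float(np.mean((X @ w - y) ** 2))

def kfold_indices(n, k):
    return np.array_split(np.arange(n), k)

def tune_lambda(X, y, lambdas, k=4):
    """Inner CV: pick the lambda with the lowest k-fold MSE."""
    folds = kfold_indices(len(y), k)
    all_idx = np.arange(len(y))
    def cv_mse(lam):
        errs = []
        for test_idx in folds:
            train_idx = np.setdiff1d(all_idx, test_idx)
            w = ridge_fit(X[train_idx], y[train_idx], lam)
            errs.append(mse(X[test_idx], y[test_idx], w))
        return float(np.mean(errs))
    return min(lambdas, key=cv_mse)

# Outer CV: each outer test fold is only ever used for evaluation,
# never for tuning, so the averaged error is not optimistically biased.
lambdas = [0.01, 0.1, 1.0, 10.0]
outer_errs = []
for test_idx in kfold_indices(n, 5):
    train_idx = np.setdiff1d(np.arange(n), test_idx)
    best_lam = tune_lambda(X[train_idx], y[train_idx], lambdas)
    w = ridge_fit(X[train_idx], y[train_idx], best_lam)
    outer_errs.append(mse(X[test_idx], y[test_idx], w))
print(np.mean(outer_errs))
```

To combine bias and variance reduction as suggested, the outer `for` loop would simply be repeated over several random shufflings of the data and the results averaged.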
24,547
Nested cross validation vs repeated k-fold
As I understand, the estimated error would be biased in the repeated cross validation example.... However if I am only interested in choosing the best model is nested cross validation really necessary? Short Answer: Your understanding is correct. Nested cross-validation is not necessary to pick the best model (out of the models considered). However, are you sure that this is what you want to do? Long Answer: As discussed here in similar terms, you can simplify your question by thinking of hyperparameter selection and model selection (i.e., Neural Network (NN), KNN, or SVM) as the same question; that is, you can think of selecting a single model from the big set $$\left\{ NN_1, ...., NN_{p_{NN}}, KNN_1,...,KNN_{p_{KNN}}, ... SVM_1,...,SVM_{p_{SVM}} \right\}$$ (where the subscript is used to index the different hyperparameter combinations). This should make it clear why you only need one cross-validation loop to select the best model. As you suspected, nested cross-validation is needed only to obtain an unbiased estimate of error. That said, I think it's very unlikely that you would want to do this in practice (except as a (nested) step in some larger validation procedure), as this leaves you vulnerable to over-fitting (especially if the number of candidate hyperparameters is large relative to the sample size). Regardless of the estimated error, your final model, fit to the whole data, could perform absolutely terribly when confronted with new data -- you would have no way of knowing.
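The "one big set of candidates" idea reduces to a single cross-validation loop, as sketched below; a trivial mean-only predictor and two ridge variants stand in for the NN/KNN/SVM candidates, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 80, 3
X = rng.normal(size=(n, p))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=n)

def ridge(lam):
    """Factory: a 'model' is a fit function returning a predictor."""
    def fit(X, y):
        w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)
        return lambda Xt: Xt @ w
    return fit

def mean_model(X, y):
    m = y.mean()
    return lambda Xt: np.full(len(Xt), m)

# One flat candidate set: "model type" and "hyperparameter value"
# are just different entries in the same list.
candidates = {"mean": mean_model,
              "ridge(0.1)": ridge(0.1),
              "ridge(10)": ridge(10.0)}

def cv_mse(fit, k=5):
    folds = np.array_split(np.arange(n), k)
    errs = []
    for te in folds:
        tr = np.setdiff1d(np.arange(n), te)
        predict = fit(X[tr], y[tr])
        errs.append(np.mean((predict(X[te]) - y[te]) ** 2))
    return float(np.mean(errs))

scores = {name: cv_mse(fit) for name, fit in candidates.items()}
best = min(scores, key=scores.get)
print(scores, best)
```

Note that `scores[best]` is exactly the optimistically biased quantity the answer warns about: it selected the winner, so it cannot also serve as an unbiased estimate of the winner's error.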
24,548
GAN: why is too-good discriminator supposed to provide "small error"?
You might find the answer in this paper "Towards principled methods for training generative adversarial networks" (https://arxiv.org/pdf/1701.04862.pdf). It has a part explaining why the generator's gradient vanishes as the discriminator gets stronger.
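The vanishing-gradient effect the paper analyses can be seen in one line of calculus: with discriminator output $D = \sigma(a)$ for logit $a$, the generator loss $\log(1 - D)$ has gradient $\frac{d}{da}\log(1 - \sigma(a)) = -\sigma(a)$, which goes to 0 as the discriminator becomes confident the sample is fake ($a \to -\infty$). A numeric sketch:

```python
import math

def sigmoid(a):
    return 1 / (1 + math.exp(-a))

# Gradient of the generator loss log(1 - D) w.r.t. the logit a
# is -sigmoid(a): it shrinks toward 0 as a -> -infinity, i.e. as
# the discriminator gets more confident that the sample is fake.
for a in (0.0, -2.0, -5.0, -10.0):
    grad = -sigmoid(a)
    print(a, grad)
```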
24,549
A potential confound in an experiment design
I'd be concerned about a related confound - 'Each participant can only view one essay supposedly authored by a non White male author, since we don't want participants to get suspicious about the purpose of the experiment because too many of their essays are written by Black or female authors.' This means that no matter the outcome, you won't be able to determine whether it is because of a difference between white male authorship and other authorship, or simply between 'majority authorship' and 'minority authorship'. If the design as shown also reflects presentation order (I assume it doesn't, but better to check) then it seems to be another issue.
24,550
A potential confound in an experiment design
Wouldn't the design be simpler if each participant rated only two essays (one White male and one other)? If so, have participants rate two essays but have them believe that the pile contained mostly male essays. They just happened to get those two by chance. Card magicians call this "forcing". If this would require too many participants, test fewer than 12 topics. Twelve is a lot.
24,551
A potential confound in an experiment design
With this sample size, how can you conclude anything? If you repeated this experiment many times, then the four markers who get both a white male and a black male would all award the white males better marks in one trial out of 16.
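The "one trial out of 16" figure follows from the null hypothesis of no bias: each of the four relevant markers favours the white male essay with probability 1/2, so all four do so with probability $(1/2)^4 = 1/16$. A quick check by simulation:

```python
import random

random.seed(0)

trials = 100_000
all_favour = sum(
    all(random.random() < 0.5 for _ in range(4))  # 4 unbiased markers
    for _ in range(trials)
)
print(all_favour / trials)  # close to 1/16 = 0.0625
```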
24,552
Outlier Detection in Time-Series: How to reduce false positives?
Don't expect much for small, discrete counts. Going from 1 to 2 visits is a 100% increase, and going from 0 to 1 visits is an infinite increase. At low levels you may be dealing with zero-inflated models, and it can be very noisy down there as well. In my experience, count data with a mixture of large and small counts like this results in two problems with your small counts: 1) they are too coarse to do much with, 2) they are generated by different processes. (Think small, rural post office versus big city post office). So you need to at least split your modeling in two: do what you're successfully doing for the larger counts, and do something different -- coarser and more approximate -- with small counts. But don't expect much of the small counts. The good news is that the big counts, by definition, include more of your transactions, so your better model covers more of the data, even though it may not cover most of your sites. (I say "modeling" to be general, but of course outlier detection is assuming a particular model and finding points that are highly unlikely with that model's assumptions.)
24,553
Outlier Detection in Time-Series: How to reduce false positives?
Each value from your time series is a sample from a probability distribution. You need to first find what the probability distribution is and then define what the word rare means within that distribution. So calculate the empirical cdf, and calculate the 95% confidence interval. Whenever something outside of that region has occurred, then by definition you know that it must be a rare event.
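That recipe can be sketched with numpy as below; Poisson counts stand in for a year of daily visit history, and "rare" is defined as falling outside the central 95% of the empirical distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
history = rng.poisson(lam=20, size=365)   # a year of daily visit counts

# Empirical 2.5% / 97.5% quantiles define the central 95% region.
lo, hi = np.quantile(history, [0.025, 0.975])

def is_rare(x):
    """Flag values outside the central 95% of the empirical distribution."""
    return x < lo or x > hi

print(lo, hi, is_rare(20), is_rare(60))
```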
24,554
Outlier Detection in Time-Series: How to reduce false positives?
You are having that problem because your data is far from a normal distribution. If the distribution is highly asymmetrical, with bumps, humps, or too long/short tails, you will encounter problems. A good idea is to apply a transformation like Box-Cox or Yeo-Johnson before using your method. In your example, if you use F(x) = log(1+x), you avoid the different-magnitude problem, and you can convert back using exp(z) - 1. There are several procedures you could use to automatically find a good lambda for the Box-Cox transformation. I personally use the median of all the methods of the boxcoxnc function from the AID package in R. If your data is not strictly positive you will need to add 1 or another positive number before using it.
24,555
Outlier Detection in Time-Series: How to reduce false positives?
It is one thing to detect an outlier at a particular level of confidence and yet another to place a second specification that would further restrict the acceptance of the outlier. I was once asked the following question: "Can AUTOBOX detect a mean shift of xx units at a pre-specified level of confidence?" Essentially what was required was a dual test. AUTOBOX is a piece of software that I have helped develop, which you might find cost-effective, as no free software has implemented this dual test. Thanks Nick: I was using a level shift as a particular example of an "outlier", or more generally an empirically identified deterministic impact. Other forms of "outliers" are pulses, seasonal pulses and local time trends, and particular combinations such as a transient change to a new level. The main point was that there may be two hypotheses in play, reflecting statistical significance and real-world significance. The customer who had originally brought this problem to my attention was interested in both.
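The "dual test" idea (statistical significance AND a minimum real-world magnitude) can be illustrated with a sketch. This is not AUTOBOX's actual procedure; it uses a plain two-sample z-test as a stand-in, and all thresholds are the caller's choice:

```python
import math
import statistics

def dual_mean_shift_test(before, after, min_shift, alpha=0.05):
    """Flag a level shift only if it is BOTH statistically significant
    (plain two-sample z-test, a simplification) AND at least
    `min_shift` units in magnitude (the real-world criterion)."""
    shift = statistics.mean(after) - statistics.mean(before)
    se = math.sqrt(statistics.variance(before) / len(before)
                   + statistics.variance(after) / len(after))
    z = shift / se
    # two-sided p-value from the standard normal cdf
    p = 2.0 * (1.0 - 0.5 * (1.0 + math.erf(abs(z) / math.sqrt(2.0))))
    return p < alpha and abs(shift) >= min_shift
```

A tiny shift in a long, quiet series can be highly significant yet practically irrelevant; the second condition is what screens those out.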
24,556
How to normalize data of unknown distribution
Have you considered taking the mean of the (3-10) measurements from each sample? Can you then work with the resulting distribution of sample means, which will approximately follow a t-distribution and, for larger n, the normal distribution?
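The effect of averaging can be demonstrated with a small simulation. The exponential draws below are an arbitrary stand-in for a skewed, non-normal measurement distribution (they are not from the question):

```python
import random
import statistics

random.seed(0)

# 200 "samples", each containing 3-10 skewed (exponential) measurements
sample_means = []
for _ in range(200):
    k = random.randint(3, 10)
    measurements = [random.expovariate(1.0) for _ in range(k)]
    sample_means.append(statistics.mean(measurements))

# The distribution of the per-sample means is far less skewed than the raw
# draws, and centres near the population mean (1.0 for expovariate(1.0)).
m = statistics.mean(sample_means)
```

Even with as few as 3-10 measurements per sample, the means are already much better behaved than the raw values, which is the point of the suggestion above.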
24,557
How to normalize data of unknown distribution
I don't think you're using "normalize" to mean what it normally means, which is typically something like normalizing the mean and/or variance, and/or whitening, for example. I think what you're trying to do is find a non-linear reparameterization and/or features that let you use linear models on your data. This is non-trivial, and has no simple answer. It's why data scientists are paid lots of money ;-) One relatively straightforward way to create non-linear features is to use a feed-forward neural network, where the number of layers, and the number of neurons per layer, controls the capacity of the network to generate features. Higher capacity => more non-linearity, more overfitting. Lower capacity => more linearity, higher bias, lower variance. Another method which gives you slightly more control is to use splines. Finally, you could create such features by hand, which I think is what you are trying to do, but then there is no simple 'black box' answer: you'll need to carefully analyze the data, look for patterns and so on.
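The simplest spline-style feature construction hinted at above is a piecewise-linear (hinge) basis; a minimal sketch, with arbitrary knot locations chosen only for the example:

```python
def hinge_features(x, knots):
    """Expand a scalar into piecewise-linear (hinge) basis features,
    max(0, x - k) for each knot k: the building block of linear splines.
    A *linear* model fit on these features can represent a bent,
    non-linear curve in the original x."""
    return [x] + [max(0.0, x - k) for k in knots]

# features for x = 5 with knots at 2 and 4
feats = hinge_features(5.0, knots=[2.0, 4.0])
```

Each knot lets the fitted line change slope at that point, which is a very controllable form of non-linearity compared to a neural network.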
24,558
How to normalize data of unknown distribution
You can try the Johnson family of distributions (SL, SU, SB, SN), which are four-parameter probability distributions. Each member of the family represents a transformation to the normal distribution.
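For instance, the SU member of the family normalizes via an asinh transform; a sketch of the forward and inverse maps (the parameter values used in any real application would have to be estimated from the data):

```python
import math

def johnson_su_to_normal(x, gamma, delta, xi, lam):
    """Johnson SU transform: z = gamma + delta * asinh((x - xi) / lam).
    For SU-distributed data with these parameters, z is standard normal."""
    return gamma + delta * math.asinh((x - xi) / lam)

def normal_to_johnson_su(z, gamma, delta, xi, lam):
    """Inverse transform, back to the original (possibly skewed) scale."""
    return xi + lam * math.sinh((z - gamma) / delta)
```

The four parameters (gamma, delta for shape; xi, lam for location and scale) are what give the family enough flexibility to match skewness and kurtosis of many empirical distributions.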
24,559
Is there a better name than "average of the integral"?
First of all, this is a great description of your project and of the problem. And I am a big fan of your home-made measurement framework, which is super cool... so why on earth does it matter what you call "averaging the integrals"? In case you are interested in some broader positioning of your work, what you would like to do is often referred to as anomaly detection. In its simplest setting it involves comparing a value in a time series against the standard deviation of the previous values. The rule is then $$x[n] > \alpha\, SD(x[1:n-1]) \Rightarrow x[n]\text{ is an outlier}$$ where $x[n]$ is the $n^{th}$ value in the series, $SD(x[1:n-1])$ is the standard deviation of all previous values between the $1^{st}$ and $(n-1)^{th}$ value, and $\alpha$ is some suitable parameter you pick, such as 1 or 2, depending on how sensitive you want the detector to be. You can of course adapt this formula to work only locally (on some interval of length $h$), $$x[n] > \alpha\, SD(x[n-h-1:n-1]) \Rightarrow x[n]\text{ is an outlier}$$ If I understood correctly, you are looking for a way to automate the testing of your devices, that is, declare a device as good/faulty after it has performed the entire test (drawn the entire diagonal). In that case simply consider the above formulas as comparing $x[n]$ against the standard deviation of all values. There are also other rules you might want to consider for the purpose of classifying a device as faulty: if any deviation (delta) is greater than some multiple of the SD of all deltas; if the squared sum of the deviations is larger than a certain threshold; if the sums of the positive and negative deltas are not approximately equal (which might be useful if you prefer smaller errors in both directions rather than a strong bias in a single direction). Of course you can find more rules and combine them using boolean logic, but I think you can get very far with the three above. 
Last but not least, once you set it up, you will need to test the classifier (a classifier is a system/model mapping an input to a class; in your case, the data of each device to either "good" or "faulty"). Create a testing set by manually labelling the performance of each device. Then look into ROC analysis, which essentially shows the trade-off between the rate at which your system correctly flags faulty devices (true positives) and the rate at which it wrongly flags good ones (false positives).
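The local SD rule can be sketched as follows (Python, purely for illustration; this variant compares the deviation from the window mean against $\alpha \cdot SD$, a common centred version of the rule, and the window length and threshold are arbitrary defaults):

```python
import statistics

def flag_outliers(x, alpha=2.0, h=20):
    """Flag x[n] if it lies more than alpha standard deviations away from
    the mean of the previous h values (local, centred version of the rule)."""
    flags = [False] * len(x)
    for n in range(h, len(x)):
        window = x[n - h:n]          # the h values preceding x[n]
        mu = statistics.mean(window)
        sd = statistics.stdev(window)
        if sd > 0 and abs(x[n] - mu) > alpha * sd:
            flags[n] = True
    return flags
```

A larger `alpha` or a longer window `h` makes the detector less sensitive; tune both against your manually labelled test set.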
24,560
HMC: How many dimensions is too many?
Maximum number of parameters

It depends a lot on the structure of your problem. For example, my experience with various hierarchical linear models in Stan was that it starts being very slow (hours or days to complete) at around 10 000 - 30 000 params (some reproducible numbers are on my blog on Stan vs. INLA). When working with models involving ordinary differential equations and complex structure, 10 parameters may be too many. When fitting just a vector of independent normals (see below), Stan takes about 40 minutes to complete with 1e5 parameters, using default settings (1000 iter warmup, 1000 iter sampling, 4 chains). So having much more than 1e5 params is very likely to be impractical. The longest part of a Stan run is however the warmup phase, when the hyperparameters of the algorithm are tweaked. If you could provide good values for those by yourself (which is hard), you might be able to push the performance even further. Also, support for MPI within-chain parallelism and for offloading matrix operations to the GPU should be added to Stan soon (see e.g. the discussions at http://discourse.mc-stan.org/t/parallelization-again-mpi-to-the-rescue/455/11 and http://discourse.mc-stan.org/t/stan-on-the-gpu/326/10), so even larger models are likely to become practical in the near future.

Diagnostics in high dimension

The HMC implementation in Stan provides multiple useful diagnostics that work even with a large number of parameters: divergent transitions, n_eff (effective sample size) and split Rhat (potential scale reduction). See the Stan manual, section "Initialization and Convergence Monitoring", for a detailed explanation of those.

R code for a simple model, just a vector of independent normals, which can scale in the number of parameters, fit in Stan:

library(rstan)

model_code = "
data {
  int N;
}
parameters {
  vector[N] a;
}
model {
  a ~ normal(0,1);
}
"

model = stan_model(model_code = model_code)
fit_large = sampling(model, data = list(N = 1e5))
24,561
Difference between exploratory and confirmatory factor analysis in determining construct independence
These methods are examples of applying exploratory and confirmatory data analysis. Exploratory data analysis looks for patterns, while confirmatory data analysis does statistical hypothesis testing on proposed models. It really should not be viewed in terms of which method to use; it is more a matter of what stage of the data analysis you are at. If you are unsure of what factors to include in your model, you apply EFA. Once you have eliminated some factors and settled on what to include in your model, you do CFA to test the model formally and see whether the chosen factors are significant.
24,562
Difference between exploratory and confirmatory factor analysis in determining construct independence
If I understand your question correctly, it is a question about testing. Testing then simply requires a kind of confirmatory factor analysis, in the same way that the question "do the means in the subgroups really differ?" requires a t-test. Unfortunately(?), selecting the general factor-analytic approach often also implies different mathematical (and statistical) models; for instance, if you select "CFA" in SPSS then it is implied that you assume uncorrelated errors and that uncorrelated errors are estimated, with that estimation excluded from the model. So, in my opinion, because of these further implications, the initial selection of the correct factor-analytic approach is often compromised by these mathematical/statistical implications. In short: your question is one of the sort "testing the null", thus you need CFA or, better, the methods developed in the framework of SEM (structural equation modelling). Note, there is a friendly and helpful mailing list full of experts in SEM called "SEMNET", and since I'm not a real expert you might get more refined feedback by asking there...
24,563
How can I optimise computational efficiency when fitting a complex model to a large data set repeatedly?
Why not run it on Amazon's EC2 cloud-computing service or a similar service? MCMCpack is, if I remember correctly, mostly implemented in C, so it isn't going to get much faster unless you decrease your model complexity, iterations, etc. With EC2, or similar cloud-computing services, you can have multiple instances at whatever specs you desire and run all of your models at once.
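Since the 100+ fits are independent, they can also be farmed out in parallel on a single large instance. A sketch of that pattern (Python here purely for illustration; `fit_model` is a hypothetical placeholder, and in practice each worker would launch a separate R/MCMCpack process, e.g. via subprocess, so even threads keep several fits going at once):

```python
from concurrent.futures import ThreadPoolExecutor

def fit_model(spec):
    """Hypothetical stand-in for one expensive MCMC fit.
    A real version would shell out to R with these settings and
    collect the saved results; here we just return a dummy value."""
    return spec["id"], spec["iterations"] // 1000  # placeholder "result"

# one spec per dataset/model combination to be fitted
specs = [{"id": k, "iterations": 50000} for k in range(8)]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(fit_model, specs))
```

With independent fits like these there is essentially no coordination cost, so throughput scales with however many cores (or instances) you rent.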
24,564
Logistic quantile regression – how to best convey the results
The first thing you can do is, for example, interpret $\hat{\beta_2}$ as the estimated effect of $sex$ on the logit of the quantile you're looking at. $\exp\{\hat{\beta_2}\}$, similarly to "classic" logistic regression, is the odds ratio of median (or any other quantile) outcome in males versus females. The difference with "classic" logistic regression is how the odds are calculated: using your (bounded) outcome instead of a probability. Besides, you can always look at the predicted quantiles according to one covariate. Of course you have to fix (condition on) the values of the other covariates in your model (like you did in your example). By the way, the transformation should be $\log(\frac{y-y_{min}}{y_{max}-y})$. (This is not really intended to be an answer, as it's just a (poor) rewording of what is written in this paper, that you cited yourself. However, it was too long to be a comment, and someone who doesn't have access to on-line journals could be interested anyway.)
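The bounded-outcome logit above, and its inverse for back-transforming fitted quantiles to the original scale, can be sketched as (Python purely for illustration; the bounds 0 and 100 in the example are arbitrary):

```python
import math

def bounded_logit(y, y_min, y_max):
    """log((y - y_min) / (y_max - y)): maps the interval (y_min, y_max)
    onto the whole real line, where linear quantile regression is fit."""
    return math.log((y - y_min) / (y_max - y))

def inverse_bounded_logit(z, y_min, y_max):
    """Back-transform a fitted quantile to the original bounded scale:
    y = (y_min + y_max * exp(z)) / (1 + exp(z))."""
    return (y_min + y_max * math.exp(z)) / (1.0 + math.exp(z))
```

Because quantiles are preserved under monotone transforms, fitting the model on the transformed scale and back-transforming the predictions gives valid quantiles on the bounded scale, which is what makes the interpretation above work.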
24,565
Adjusting the p-value for adaptive sequential analysis (for chi square test)?
This area of sequential clinical trials has been explored substantially in the literature. Some of the notable researchers are Scott Emerson, Tom Flemming, David DeMets, Stephen Senn, and Stuart Pocock, among others. It's possible to specify an "alpha-spending rule". The term has its origins in the nature of frequentist (non-Fisherian) testing, where each action that increases the risk of a false positive finding should necessarily reduce power to keep the test at the correct size. However, the majority of such tests require that "stopping rules" be prespecified based on the information bounds of the study. (As a reminder, more information means greater power when the null is false.) It sounds like what you are interested in is a continuous monitoring process in which each event time warrants a "look" into the data. To the best of my knowledge, such a test has no power. It can be done with Bayesian analysis, where the posterior is continuously updated as a function of time and Bayes factors are used to summarize evidence rather than $p$-values. See [1] www.rctdesign.org/
24,566
Adjusting the p-value for adaptive sequential analysis (for chi square test)?
This sounds like a simulation is in order. So I simulated your procedure as follows: $N=1000$ people are added to the trial one-by-one, randomly assigned to one of the $4$ groups. The outcome of the treatment for this person is chosen randomly (i.e. I am simulating the null hypothesis of all treatments having zero effect). After adding each person, I perform a chi-squared test on the $4 \times 2$ contingency table and check if $p\le \alpha$. If it is, then (and only then) I additionally perform chi-squared tests on the reduced $2 \times 2$ contingency tables to test each group against the other three groups pooled together. If one of these further four tests comes out significant (with the same $\alpha$), then I check if this treatment performs better or worse than the other three pooled together. If worse, I kick this treatment out and continue adding people. If better, I stop the trial. If all $N$ people are added without any winning treatment, the trial is over (note that the results of my analysis will strongly depend on $N$). Now we can run this many times and find out in what fraction of runs one of the treatments comes out as a winner -- these would be false positives. If I run it 1000 times for nominal $\alpha=0.05$, I get 282 false positives, i.e. a $0.28$ type I error rate. We can repeat this whole analysis for several nominal $\alpha$ and see what actual error rate we get: $$\begin{array}{cc}\alpha & \text{error rate} \\ 0.05 & \sim 0.28 \\ 0.01 & \sim 0.06 \\ 0.001 & \sim 0.008\end{array}$$ So if you want the actual error rate to be held at, say, the $0.05$ level, you should choose a nominal $\alpha$ of around $0.008$ -- but of course it is better to run a longer simulation to estimate this more precisely. My quick and dirty code in Matlab is below. Please note that this code is brain-dead and not optimized at all; everything runs in loops and is horribly slow. This can probably be accelerated a lot. 
function seqAnalysis()
    alphas = [0.001 0.01 0.05];
    for a = 1:length(alphas)
        falsePositives(a) = trials_run(1000, 1000, alphas(a));
    end
    display(num2str([alphas; falsePositives]))
end

function outcome = trials_run(Nrep, N, alpha)
    outcomes = zeros(1,Nrep);
    for rep = 1:Nrep
        if mod(rep,10) == 0
            fprintf('.')
        end
        outcomes(rep) = trial(N, alpha);
    end
    fprintf('\n')
    outcome = sum(outcomes);
end

function result = trial(N, alpha)
    outcomes = zeros(2,4);
    result = 0;
    winner = [];
    %// adding subjects one by one
    for subject = 1:N
        group = randi(size(outcomes,2));
        outcome = randi(2);
        outcomes(outcome, group) = outcomes(outcome, group) + 1;
        %// if groups are significantly different
        if chisqtest(outcomes) < alpha
            %// compare each treatment against the rest
            for group = 1:size(outcomes,2)
                contrast = [outcomes(:, group) ...
                    sum(outcomes(:, setdiff(1:size(outcomes,2), group)),2)];
                %// if significantly different
                if chisqtest(contrast) < alpha
                    %// check if better or worse
                    if contrast(1,1)/contrast(2,1) < contrast(1,2)/contrast(2,2)
                        %// kick out this group
                        outcomes = outcomes(:, setdiff(1:size(outcomes,2), group));
                    else
                        %// winner!
                        winner = group;
                    end
                    break
                end
            end
        end
        if ~isempty(winner)
            result = 1;
            break
        end
    end
end

function p = chisqtest(x)
    e = sum(x,2)*sum(x)/sum(x(:));
    X2 = (x-e).^2./e;
    X2 = sum(X2(:));
    df = prod(size(x)-[1 1]);
    p = 1-chi2cdf(X2,df);
end
Adjusting the p-value for adaptive sequential analysis (for chi square test)?
This sounds like a simulation is in order. So I simulated your procedure as follows: $N=1000$ people are added to the trial one-by-one, randomly assigned to one of the $4$ groups. The outcome of the t
Adjusting the p-value for adaptive sequential analysis (for chi square test)? This sounds like a simulation is in order. So I simulated your procedure as follows: $N=1000$ people are added to the trial one-by-one, randomly assigned to one of the $4$ groups. The outcome of the treatment for this person is chosen randomly (i.e. I am simulating the null hypothesis of all treatments having zero effect). After adding each person, I perform a chi squared test on the $4 \times 2$ contingency table and check if $p\le \alpha$. If it is, then (and only then) I additionally perform chi squared tests on the reduced $2 \times 2$ contingency tables to test each group against the other three groups pooled together. If one of these further four tests comes out significant (with the same $\alpha$), then I check if this treatment performs better or worse than the other three pooled together. If worse, I kick this treatment out and continue adding people. If better, I stop the trial. If all $N$ people are added without any winning treatment, the trial is over (note that the results of my analysis will strongly depend on $N$). Now we can run this many times and find out in what fraction of runs one of the treatments comes out as a winner -- these would be false positives. If I run it 1000 times for nominal $\alpha=0.05$, I get 282 false positives, i.e. a type I error rate of $0.28$. We can repeat this whole analysis for several nominal $\alpha$ and see what actual error rate we get: $$\begin{array}{cc}\alpha & \text{error rate} \\ 0.05 & \sim 0.28 \\ 0.01 & \sim 0.06 \\ 0.001 & \sim 0.008\end{array}$$ So if you want the actual error rate to be held, say, at the $0.05$ level, you should choose a nominal $\alpha$ of around $0.008$ -- but of course it is better to run a longer simulation to estimate this more precisely. My quick and dirty code in Matlab is below. Please note that this code is brain-dead and not optimized at all; everything runs in loops and is horribly slow. 
This can probably be accelerated a lot. function seqAnalysis() alphas = [0.001 0.01 0.05]; for a = 1:length(alphas) falsePositives(a) = trials_run(1000, 1000, alphas(a)); end display(num2str([alphas; falsePositives])) end function outcome = trials_run(Nrep, N, alpha) outcomes = zeros(1,Nrep); for rep = 1:Nrep if mod(rep,10) == 0 fprintf('.') end outcomes(rep) = trial(N, alpha); end fprintf('\n') outcome = sum(outcomes); end function result = trial(N, alpha) outcomes = zeros(2,4); result = 0; winner = []; %// adding subjects one by one for subject = 1:N group = randi(size(outcomes,2)); outcome = randi(2); outcomes(outcome, group) = outcomes(outcome, group) + 1; %// if groups are significantly different if chisqtest(outcomes) < alpha %// compare each treatment against the rest for group = 1:size(outcomes,2) contrast = [outcomes(:, group) ... sum(outcomes(:, setdiff(1:size(outcomes,2), group)),2)]; %// if significantly different if chisqtest(contrast) < alpha %// check if better or worse if contrast(1,1)/contrast(2,1) < contrast(1,2)/contrast(2,2) %// kick out this group outcomes = outcomes(:, setdiff(1:size(outcomes,2), group)); else %// winner! winner = group; end break end end end if ~isempty(winner) result = 1; break end end end function p = chisqtest(x) e = sum(x,2)*sum(x)/sum(x(:)); X2 = (x-e).^2./e; X2 = sum(X2(:)); df = prod(size(x)-[1 1]); p = 1-chi2cdf(X2,df); end
Adjusting the p-value for adaptive sequential analysis (for chi square test)? This sounds like a simulation is in order. So I simulated your procedure as follows: $N=1000$ people are added to the trial one-by-one, randomly assigned to one of the $4$ groups. The outcome of the t
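For readers without Matlab, the adaptive procedure in the answer above can be sketched in plain Python using only the standard library. The chi-square p-values for the $2 \times k$ tables use closed-form survival functions for df = 1, 2, 3 (so no SciPy is needed); the 4 arms and binary outcome follow the answer, while all function names and the small test sizes are my own choices.

```python
import math
import random

def chi2_sf(x, df):
    # Closed-form chi-square survival functions for the small dfs we need:
    # df=1 and df=3 via erfc, df=2 is exactly exp(-x/2).
    if x <= 0:
        return 1.0
    if df == 1:
        return math.erfc(math.sqrt(x / 2))
    if df == 2:
        return math.exp(-x / 2)
    if df == 3:
        return math.erfc(math.sqrt(x / 2)) + math.sqrt(2 * x / math.pi) * math.exp(-x / 2)
    raise ValueError("df must be 1, 2 or 3")

def chi2_pvalue(table):
    # Pearson chi-square test on a 2 x k contingency table.
    # Returns 1.0 while the table still has an empty row or column.
    k = len(table[0])
    rows = [sum(r) for r in table]
    cols = [table[0][j] + table[1][j] for j in range(k)]
    total = rows[0] + rows[1]
    if k < 2 or min(rows) == 0 or min(cols) == 0:
        return 1.0
    x2 = sum((table[i][j] - rows[i] * cols[j] / total) ** 2
             / (rows[i] * cols[j] / total)
             for i in range(2) for j in range(k))
    return chi2_sf(x2, k - 1)

def trial(n, alpha, rng):
    # One trial under the null; returns 1 if a spurious "winner" is declared.
    table = [[0] * 4, [0] * 4]          # row 0: successes, row 1: failures
    for _ in range(n):
        g = rng.randrange(len(table[0]))
        table[rng.randrange(2)][g] += 1
        if chi2_pvalue(table) >= alpha:
            continue
        k = len(table[0])
        for g in range(k):
            rest_s = sum(table[0][j] for j in range(k) if j != g)
            rest_f = sum(table[1][j] for j in range(k) if j != g)
            if chi2_pvalue([[table[0][g], rest_s], [table[1][g], rest_f]]) < alpha:
                own = table[0][g] / (table[0][g] + table[1][g])
                rest = rest_s / (rest_s + rest_f)
                if own > rest:
                    return 1            # winner declared: a false positive
                # kick the losing arm out and keep adding people
                table = [[row[j] for j in range(k) if j != g] for row in table]
                break
    return 0

def false_positive_rate(nrep, n, alpha, seed=1):
    rng = random.Random(seed)
    return sum(trial(n, alpha, rng) for _ in range(nrep)) / nrep
```

With `nrep` and `n` increased this should show the same inflation pattern as the table in the answer, i.e. a realized type I error rate well above the nominal $\alpha$.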
24,567
Lasso modification for LARS
Let $X$ (size $n\times p$) denote a set of standardised inputs, $y$ (size $n \times 1$) centered responses, $\beta$ (size $p \times 1$) regression weights and $\lambda > 0$ an $l_1$-norm penalisation coefficient. The LASSO problem then reads \begin{align} \beta^* &= \text{argmin}_{\beta}\ L(\beta,\lambda) \\ L(\beta,\lambda) &= \Vert y-X\beta \Vert_2^2 + \lambda \Vert \beta \Vert_1 \end{align} Solving this for all values of $\lambda > 0$ yields the so-called LASSO regularisation path $\beta^*(\lambda)$. For a fixed value of the penalisation coefficient $\lambda^*$ (i.e. fixed number of active predictors = fixed step of the LARS algorithm), it is possible to show that $\beta^*$ satisfies (simply write out the KKT stationarity condition as in this answer) $$ \lambda^* = 2 \ \text{sign}(\beta_a^*) X_a^T (y - X \beta^*),\ \ \ \forall a \in A $$ with $A$ representing the set of active predictors. Because $\lambda^*$ must be positive (it is a penalisation coefficient), it is clear that the sign of $\beta_a^*$ (the weight of any non-zero, hence active, predictor) should be the same as that of $X_a^T (y - X\beta^*) = X_{a}^T r$ i.e. the correlation with the current regression residual.
Lasso modification for LARS
Let $X$ (size $n\times p$) denote a set of standardised inputs, $y$ (size $n \times 1$) centered responses, $\beta$ (size $p \times 1$) regression weights and $\lambda > 0$ a $l_1$-norm penalisation c
Lasso modification for LARS Let $X$ (size $n\times p$) denote a set of standardised inputs, $y$ (size $n \times 1$) centered responses, $\beta$ (size $p \times 1$) regression weights and $\lambda > 0$ an $l_1$-norm penalisation coefficient. The LASSO problem then reads \begin{align} \beta^* &= \text{argmin}_{\beta}\ L(\beta,\lambda) \\ L(\beta,\lambda) &= \Vert y-X\beta \Vert_2^2 + \lambda \Vert \beta \Vert_1 \end{align} Solving this for all values of $\lambda > 0$ yields the so-called LASSO regularisation path $\beta^*(\lambda)$. For a fixed value of the penalisation coefficient $\lambda^*$ (i.e. fixed number of active predictors = fixed step of the LARS algorithm), it is possible to show that $\beta^*$ satisfies (simply write out the KKT stationarity condition as in this answer) $$ \lambda^* = 2 \ \text{sign}(\beta_a^*) X_a^T (y - X \beta^*),\ \ \ \forall a \in A $$ with $A$ representing the set of active predictors. Because $\lambda^*$ must be positive (it is a penalisation coefficient), it is clear that the sign of $\beta_a^*$ (the weight of any non-zero, hence active, predictor) should be the same as that of $X_a^T (y - X\beta^*) = X_{a}^T r$ i.e. the correlation with the current regression residual.
Lasso modification for LARS Let $X$ (size $n\times p$) denote a set of standardised inputs, $y$ (size $n \times 1$) centered responses, $\beta$ (size $p \times 1$) regression weights and $\lambda > 0$ a $l_1$-norm penalisation c
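The sign condition above is easy to check numerically. The sketch below (my own construction, not from the LARS paper) fits the lasso by plain cyclic coordinate descent on a small synthetic problem, then verifies that for every active predictor $\text{sign}(\beta_a) = \text{sign}(X_a^T r)$ and $2\,|X_a^T r| = \lambda$, exactly as the KKT condition says.

```python
import random

def lasso_cd(X, y, lam, sweeps=1000):
    # Cyclic coordinate descent for min_b ||y - Xb||_2^2 + lam * ||b||_1.
    n, p = len(X), len(X[0])
    b = [0.0] * p
    for _ in range(sweeps):
        for j in range(p):
            # partial residual with feature j removed
            rj = [y[i] - sum(X[i][k] * b[k] for k in range(p) if k != j)
                  for i in range(n)]
            rho = sum(X[i][j] * rj[i] for i in range(n))
            norm = sum(X[i][j] ** 2 for i in range(n))
            # soft-thresholding at lam/2 (the 2 comes from the squared loss)
            if rho > lam / 2:
                b[j] = (rho - lam / 2) / norm
            elif rho < -lam / 2:
                b[j] = (rho + lam / 2) / norm
            else:
                b[j] = 0.0
    return b

def center(v):
    m = sum(v) / len(v)
    return [x - m for x in v]

rng = random.Random(0)
n, p, lam = 50, 3, 20.0
cols = [center([rng.gauss(0, 1) for _ in range(n)]) for _ in range(p)]
X = [[cols[j][i] for j in range(p)] for i in range(n)]
y = center([2 * X[i][0] - 1.5 * X[i][1] + rng.gauss(0, 0.5) for i in range(n)])

b = lasso_cd(X, y, lam)
r = [y[i] - sum(X[i][j] * b[j] for j in range(p)) for i in range(n)]
corr = [sum(X[i][j] * r[i] for i in range(n)) for j in range(p)]   # X_a^T r
```

Coordinate descent is used here only as a convenient solver; any lasso solver would give the same stationarity identities at the optimum.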
24,568
Lasso modification for LARS
@Mr._White provided a great intuitive explanation of the major difference between LARS and Lasso; the only point I would add is that lasso is (kind of) like a backward selection approach, knocking out a term at each step as long as a term exists for which one of those ("normalized" over $X \times X$) correlations exists. LARS keeps everything in there -- basically performing the lasso in every possible order. That does mean that in lasso, each iteration is dependent on which terms have already been removed. Efron's implementation illustrates the differences very well: lars.R in the source pkg for lars. Notice the update step of the $X \times X$ matrix and $\zeta$ starting at line 180, and the dropping of the terms for which $\zeta_{min} < \zeta_{current}$. I can imagine some weird situations arising from spaces $A$ where the terms are unbalanced ($x_1$ and $x_2$ are very correlated but not with others, $x_2$ with $x_3$ but not with others, etc.); in such cases the selection order could be quite biased.
Lasso modification for LARS
@Mr._White provided a great intuitive explanation of the major difference between LARS and Lasso; the only point I would add is that lasso is (kind of) like a backward selection approach, knocking out
Lasso modification for LARS @Mr._White provided a great intuitive explanation of the major difference between LARS and Lasso; the only point I would add is that lasso is (kind of) like a backward selection approach, knocking out a term at each step as long as a term exists for which one of those ("normalized" over $X \times X$) correlations exists. LARS keeps everything in there -- basically performing the lasso in every possible order. That does mean that in lasso, each iteration is dependent on which terms have already been removed. Efron's implementation illustrates the differences very well: lars.R in the source pkg for lars. Notice the update step of the $X \times X$ matrix and $\zeta$ starting at line 180, and the dropping of the terms for which $\zeta_{min} < \zeta_{current}$. I can imagine some weird situations arising from spaces $A$ where the terms are unbalanced ($x_1$ and $x_2$ are very correlated but not with others, $x_2$ with $x_3$ but not with others, etc.); in such cases the selection order could be quite biased.
Lasso modification for LARS @Mr._White provided a great intuitive explanation of the major difference between LARS and Lasso; the only point I would add is that lasso is (kind of) like a backward selection approach, knocking out
24,569
Asymptotic normality of a quadratic form
There is some difficulty when using the Delta method. It's more convenient to derive it by hand. By the law of large numbers, $\hat{C}\xrightarrow{P} C$. Hence $\hat{C} +\gamma_n I\xrightarrow{P} C$. Applying Slutsky's theorem (with the symmetric square root), we have $$\sqrt{n}(\hat{C} +\gamma_n I)^{-1/2}(\bar{X}-\mu)\xrightarrow{d}\mathcal{N} (0,I_p).$$ By the continuous mapping theorem, we have $$ {n}(\bar{X}-\mu)^T (\hat{C} +\gamma_n I)^{-1}(\bar{X}-\mu)\xrightarrow{d}\chi^2_p. $$ Hence $$ \sqrt{n}(\bar{X}-\mu)^T (\hat{C} +\gamma_n I)^{-1}(\bar{X}-\mu)\xrightarrow{P}0. $$ By Slutsky's theorem, since $\sqrt{n}(\bar{X}-\mu)\xrightarrow{d}\mathcal{N}(0,C)$, we have $$ \sqrt{n}\mu^T(\hat{C} +\gamma_n I)^{-1}(\bar{X}-\mu)\xrightarrow{d}\mathcal{N} (0,\mu^T C^{-1}\mu). $$ Combining the above two results yields \begin{align} &\sqrt{n}\big(\bar{X}^T (\hat{C} +\gamma_n I)^{-1}\bar{X}-\mu^T (\hat{C} +\gamma_n I)^{-1}\mu\big) \\ = &\sqrt{n}\Big((\bar{X}-\mu)^T (\hat{C} +\gamma_n I)^{-1}(\bar{X}-\mu)+2\mu^T(\hat{C} +\gamma_n I)^{-1}(\bar{X}-\mu)\Big) \\ =&2\sqrt{n}\mu^T(\hat{C} +\gamma_n I)^{-1}(\bar{X}-\mu)+o_P(1) \\ \xrightarrow{d}&\mathcal{N} (0,4\mu^T C^{-1}\mu). \end{align} The remaining task is to deal with $$ \sqrt{n} \Big( \mu^T (\hat{C} +\gamma_n I)^{-1}\mu-\mu^T C^{-1}\mu \Big). $$ Unfortunately, this term does NOT converge to $0$. The behavior becomes complicated and depends on the third and fourth moments. For simplicity, below we assume the $X_i$ are normally distributed and $\gamma_n=o(n^{-1/2})$. It's a standard result that $$ \sqrt{n}(\hat{C}-C)\xrightarrow{d}C^{1/2} W C^{1/2}, $$ where $W$ is a symmetric random matrix with diagonal elements distributed as $\mathcal{N}(0,2)$ and off-diagonal elements as $\mathcal{N}(0,1)$. 
Thus, $$ \sqrt{n}(\hat{C}+\gamma_n I-C)\xrightarrow{d}C^{1/2} W C^{1/2}. $$ By the matrix Taylor expansion $(I+A)^{-1}= I-A+A^2-\cdots$, we have \begin{align} &\sqrt{n}\Big((\hat{C} +\gamma_n I)^{-1}- C^{-1}\Big)= \sqrt{n}C^{-1/2}\Big(\big(C^{-1/2}(\hat{C} +\gamma_n I)C^{-1/2}\big)^{-1}-I\Big)C^{-1/2}\\ =&-\sqrt{n}C^{-1}\Big(\hat{C} +\gamma_n I-C\Big)C^{-1}+O_P(n^{-1/2}) \xrightarrow{d}-C^{-1/2} W C^{-1/2}. \end{align} Thus, $$ \sqrt{n} \Big( \mu^T (\hat{C} +\gamma_n I)^{-1}\mu-\mu^T C^{-1}\mu \Big)\xrightarrow{d}-\mu^T C^{-1/2} W C^{-1/2}\mu \sim \mathcal{N}(0,2(\mu^T C^{-1}\mu)^2).$$ Since $\bar{X}$ and $\hat{C}$ are independent under normality, the two contributions are independent and their variances add: \begin{align} \sqrt{n}\big(\bar{X}^T (\hat{C} +\gamma_n I)^{-1}\bar{X}-\mu^T C^{-1}\mu\big) \xrightarrow{d}\mathcal{N} (0,4\mu^T C^{-1}\mu+2(\mu^T C^{-1}\mu)^2). \end{align}
Asymptotic normality of a quadratic form
There is some difficulty when using Delta method. It's more convenient to derive it by hand. By law of large number, $\hat{C}\xrightarrow{P} C$. Hence $\hat{C} +\gamma_n I\xrightarrow{P} C$. Apply Slu
Asymptotic normality of a quadratic form There is some difficulty when using the Delta method. It's more convenient to derive it by hand. By the law of large numbers, $\hat{C}\xrightarrow{P} C$. Hence $\hat{C} +\gamma_n I\xrightarrow{P} C$. Applying Slutsky's theorem (with the symmetric square root), we have $$\sqrt{n}(\hat{C} +\gamma_n I)^{-1/2}(\bar{X}-\mu)\xrightarrow{d}\mathcal{N} (0,I_p).$$ By the continuous mapping theorem, we have $$ {n}(\bar{X}-\mu)^T (\hat{C} +\gamma_n I)^{-1}(\bar{X}-\mu)\xrightarrow{d}\chi^2_p. $$ Hence $$ \sqrt{n}(\bar{X}-\mu)^T (\hat{C} +\gamma_n I)^{-1}(\bar{X}-\mu)\xrightarrow{P}0. $$ By Slutsky's theorem, since $\sqrt{n}(\bar{X}-\mu)\xrightarrow{d}\mathcal{N}(0,C)$, we have $$ \sqrt{n}\mu^T(\hat{C} +\gamma_n I)^{-1}(\bar{X}-\mu)\xrightarrow{d}\mathcal{N} (0,\mu^T C^{-1}\mu). $$ Combining the above two results yields \begin{align} &\sqrt{n}\big(\bar{X}^T (\hat{C} +\gamma_n I)^{-1}\bar{X}-\mu^T (\hat{C} +\gamma_n I)^{-1}\mu\big) \\ = &\sqrt{n}\Big((\bar{X}-\mu)^T (\hat{C} +\gamma_n I)^{-1}(\bar{X}-\mu)+2\mu^T(\hat{C} +\gamma_n I)^{-1}(\bar{X}-\mu)\Big) \\ =&2\sqrt{n}\mu^T(\hat{C} +\gamma_n I)^{-1}(\bar{X}-\mu)+o_P(1) \\ \xrightarrow{d}&\mathcal{N} (0,4\mu^T C^{-1}\mu). \end{align} The remaining task is to deal with $$ \sqrt{n} \Big( \mu^T (\hat{C} +\gamma_n I)^{-1}\mu-\mu^T C^{-1}\mu \Big). $$ Unfortunately, this term does NOT converge to $0$. The behavior becomes complicated and depends on the third and fourth moments. For simplicity, below we assume the $X_i$ are normally distributed and $\gamma_n=o(n^{-1/2})$. It's a standard result that $$ \sqrt{n}(\hat{C}-C)\xrightarrow{d}C^{1/2} W C^{1/2}, $$ where $W$ is a symmetric random matrix with diagonal elements distributed as $\mathcal{N}(0,2)$ and off-diagonal elements as $\mathcal{N}(0,1)$. 
Thus, $$ \sqrt{n}(\hat{C}+\gamma_n I-C)\xrightarrow{d}C^{1/2} W C^{1/2}. $$ By the matrix Taylor expansion $(I+A)^{-1}= I-A+A^2-\cdots$, we have \begin{align} &\sqrt{n}\Big((\hat{C} +\gamma_n I)^{-1}- C^{-1}\Big)= \sqrt{n}C^{-1/2}\Big(\big(C^{-1/2}(\hat{C} +\gamma_n I)C^{-1/2}\big)^{-1}-I\Big)C^{-1/2}\\ =&-\sqrt{n}C^{-1}\Big(\hat{C} +\gamma_n I-C\Big)C^{-1}+O_P(n^{-1/2}) \xrightarrow{d}-C^{-1/2} W C^{-1/2}. \end{align} Thus, $$ \sqrt{n} \Big( \mu^T (\hat{C} +\gamma_n I)^{-1}\mu-\mu^T C^{-1}\mu \Big)\xrightarrow{d}-\mu^T C^{-1/2} W C^{-1/2}\mu \sim \mathcal{N}(0,2(\mu^T C^{-1}\mu)^2).$$ Since $\bar{X}$ and $\hat{C}$ are independent under normality, the two contributions are independent and their variances add: \begin{align} \sqrt{n}\big(\bar{X}^T (\hat{C} +\gamma_n I)^{-1}\bar{X}-\mu^T C^{-1}\mu\big) \xrightarrow{d}\mathcal{N} (0,4\mu^T C^{-1}\mu+2(\mu^T C^{-1}\mu)^2). \end{align}
Asymptotic normality of a quadratic form There is some difficulty when using Delta method. It's more convenient to derive it by hand. By law of large number, $\hat{C}\xrightarrow{P} C$. Hence $\hat{C} +\gamma_n I\xrightarrow{P} C$. Apply Slu
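The asymptotic normality claim above is easy to probe numerically in the scalar case $p=1$ (toy parameters of my own choosing: $\mu=\sigma=1$ and ridge $\gamma_n = n^{-0.7} = o(n^{-1/2})$). The sketch below only checks that the $\sqrt{n}$-scaled statistic has a stable, $O(1)$ mean and variance, i.e. a nondegenerate limit at the $\sqrt{n}$ scale:

```python
import math
import random

def simulate_T(n, reps, mu=1.0, sigma=1.0, seed=0):
    # Draws of T_n = sqrt(n) * (xbar^2 / (s^2 + g_n) - mu^2 / sigma^2),
    # the p = 1 version of the quadratic form, with ridge g_n = n^(-0.7).
    rng = random.Random(seed)
    g = n ** -0.7
    draws = []
    for _ in range(reps):
        xs = [rng.gauss(mu, sigma) for _ in range(n)]
        xbar = sum(xs) / n
        s2 = sum((x - xbar) ** 2 for x in xs) / n   # MLE variance estimate
        draws.append(math.sqrt(n) * (xbar * xbar / (s2 + g)
                                     - mu * mu / sigma ** 2))
    return draws

T = simulate_T(n=400, reps=2000)
mean_T = sum(T) / len(T)
var_T = sum((t - mean_T) ** 2 for t in T) / len(T)
```

On this seed the empirical variance settles to a stable order-one value, consistent with a nondegenerate normal limit; increasing $n$ drives the mean (which carries an $O(\sqrt{n}\gamma_n)$ bias from the ridge) toward zero.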
24,570
What is probabilistic programming?
Probabilistic Programming is a technique for defining a statistical model. Unlike defining a model by its probability distribution function, or drawing a graph, you express the model in a programming language, typically as a forward sampler. Automatic inference from a model specification is a typical feature of probabilistic programming tools, but it is not essential, and there is no need for it to be Bayesian. There are a variety of useful things that you can do with a model specified as a probabilistic program. For example, the paper Deriving Probability Density Functions from Probabilistic Functional Programs describes a tool that analyzes a probabilistic program and works out its probability distribution function. The paper Detecting Parameter Symmetries in Probabilistic Models analyzes a probabilistic program for parameter symmetries. This kind of work also falls under probabilistic programming.
What is probabilistic programming?
Probabilistic Programming is a technique for defining a statistical model. Unlike defining a model by its probability distribution function, or drawing a graph, you express the model in a programming
What is probabilistic programming? Probabilistic Programming is a technique for defining a statistical model. Unlike defining a model by its probability distribution function, or drawing a graph, you express the model in a programming language, typically as a forward sampler. Automatic inference from a model specification is a typical feature of probabilistic programming tools, but it is not essential, and there is no need for it to be Bayesian. There are a variety of useful things that you can do with a model specified as a probabilistic program. For example, the paper Deriving Probability Density Functions from Probabilistic Functional Programs describes a tool that analyzes a probabilistic program and works out its probability distribution function. The paper Detecting Parameter Symmetries in Probabilistic Models analyzes a probabilistic program for parameter symmetries. This kind of work also falls under probabilistic programming.
What is probabilistic programming? Probabilistic Programming is a technique for defining a statistical model. Unlike defining a model by its probability distribution function, or drawing a graph, you express the model in a programming
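A minimal illustration of "model as a forward sampler": the toy model below (my own example, not tied to any particular probabilistic programming tool) encodes a prior over a mean and a likelihood simply as ordinary code that draws samples.

```python
import random

def model(n, rng):
    # A probabilistic program written as a forward sampler:
    # mu ~ Normal(0, 1);  x_i ~ Normal(mu, 1) for i = 1..n
    mu = rng.gauss(0.0, 1.0)
    xs = [rng.gauss(mu, 1.0) for _ in range(n)]
    return mu, xs

rng = random.Random(42)
mu, xs = model(10, rng)
```

A probabilistic programming tool would take this same specification and derive, for example, its density function or a posterior sampler automatically; here it is just runnable Python.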
24,571
Calculating confidence intervals for mode?
While it appears there hasn't been too much research into this specifically, there is a paper that did delve into this on some level. The paper On bootstrapping the mode in the nonparametric regression model with random design (Ziegler, 2001) suggests the use of a smoothed paired bootstrap (SPB). In this method, to quote the abstract, "bootstrap variables are generated from a smooth bivariate density based on the pairs of observations." The author claims that SPB "is able to capture the correct amount of bias if the pilot estimator for m is over smoothed." Here, m is the regression function for two i.i.d. variables. Good luck, and hope this gives you a start!
Calculating confidence intervals for mode?
While it appears there hasn't been too much research into this specifically, there is a paper that did delve into this on some level. The paper On bootstrapping the mode in the nonparametric regressio
Calculating confidence intervals for mode? While it appears there hasn't been too much research into this specifically, there is a paper that did delve into this on some level. The paper On bootstrapping the mode in the nonparametric regression model with random design (Ziegler, 2001) suggests the use of a smoothed paired bootstrap (SPB). In this method, to quote the abstract, "bootstrap variables are generated from a smooth bivariate density based on the pairs of observations." The author claims that SPB "is able to capture the correct amount of bias if the pilot estimator for m is over smoothed." Here, m is the regression function for two i.i.d. variables. Good luck, and hope this gives you a start!
Calculating confidence intervals for mode? While it appears there hasn't been too much research into this specifically, there is a paper that did delve into this on some level. The paper On bootstrapping the mode in the nonparametric regressio
24,572
Can Frank Harrell's method be used to obtain optimism-corrected regression coefficients?
This goes against what Harrell seems to mean when he writes about bootstrap validation. Harrell's argument basically goes like this. Splitting the data wastes data that could have been used for training, so train on the entire dataset. However, then we risk overfitting. We always risk overfitting, but when we have holdout data, we can catch that we have just played connect-the-dots. Therefore, bootstrap your data, fit the model to the bootstrap sample, evaluate that model on the full data set, and see how that performance compares to the performance of the model that was trained on the entire data set. Because we do this many times with bootstrap samples, we can get a good estimate of overfitting when we trained on the entire data set. If we are happy with how little overfitting there is, then we should use the model that was trained on the entire data set, which we now consider validated. At no point do we adjust the coefficients.
Can Frank Harrell's method be used to obtain optimism-corrected regression coefficients?
This goes against what Harrell seems to mean when he writes about bootstrap validation. Harrell's argument basically goes like this. Splitting the data wastes data that could have been used for train
Can Frank Harrell's method be used to obtain optimism-corrected regression coefficients? This goes against what Harrell seems to mean when he writes about bootstrap validation. Harrell's argument basically goes like this. Splitting the data wastes data that could have been used for training, so train on the entire dataset. However, then we risk overfitting. We always risk overfitting, but when we have holdout data, we can catch that we have just played connect-the-dots. Therefore, bootstrap your data, fit the model to the bootstrap sample, evaluate that model on the full data set, and see how that performance compares to the performance of the model that was trained on the entire data set. Because we do this many times with bootstrap samples, we can get a good estimate of overfitting when we trained on the entire data set. If we are happy with how little overfitting there is, then we should use the model that was trained on the entire data set, which we now consider validated. At no point do we adjust the coefficients.
Can Frank Harrell's method be used to obtain optimism-corrected regression coefficients? This goes against what Harrell seems to mean when he writes about bootstrap validation. Harrell's argument basically goes like this. Splitting the data wastes data that could have been used for train
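The optimism bootstrap described in the answer above can be sketched as follows. Simple linear regression and $R^2$ are my stand-ins for "the model" and "performance"; the data, the number of resamples $B$, and the seed are illustrative.

```python
import random

def fit(xs, ys):
    # least-squares line y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return my - slope * mx, slope

def r2(model, xs, ys):
    a, b = model
    my = sum(ys) / len(ys)
    sse = sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))
    sst = sum((y - my) ** 2 for y in ys)
    return 1 - sse / sst

rng = random.Random(3)
n = 40
xs = [rng.uniform(0, 3) for _ in range(n)]
ys = [2 * x + rng.gauss(0, 1) for x in xs]

full_model = fit(xs, ys)            # 1. fit once on the ENTIRE data set
apparent = r2(full_model, xs, ys)   #    apparent (optimistic) performance

B, optimism = 200, 0.0
for _ in range(B):                  # 2. estimate the optimism by bootstrap
    idx = [rng.randrange(n) for _ in range(n)]
    bx, by = [xs[i] for i in idx], [ys[i] for i in idx]
    m = fit(bx, by)
    # (performance on the bootstrap training sample) minus
    # (performance of that same model back on the full data)
    optimism += (r2(m, bx, by) - r2(m, xs, ys)) / B

corrected = apparent - optimism     # 3. optimism-corrected performance
# full_model's coefficients are never touched -- only the estimate of
# its performance is corrected, which is exactly the point of the answer.
```

If `corrected` is close to `apparent`, there is little overfitting and the full-data model is the one you report.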
24,573
Interpolating binned data such that bin average is preserved
Here is a paper that describes an iterative method that does what you're asking: Mean preserving algorithm for smoothly interpolating averaged data M.D. Rymes, D.R. Myers, Mean preserving algorithm for smoothly interpolating averaged data, Solar Energy, Volume 71, Issue 4, 2001, Pages 225-231, ISSN 0038-092X, https://doi.org/10.1016/S0038-092X(01)00052-4. (http://www.sciencedirect.com/science/article/pii/S0038092X01000524) Abstract: Hourly mean or monthly mean values of measured solar radiation are typical vehicles for summarized solar radiation and meteorological data. Often, solar-based renewable energy system designers, researchers, and engineers prefer to work with more highly time resolved data, such as detailed diurnal profiles, or mean daily values. The object of this paper is to present a simple method for smoothly interpolating averaged (coarsely resolved) data into data with a finer resolution, while preserving the deterministic mean of the data. The technique preserves the proper component relationship between direct, diffuse, and global solar radiation (when values for at least two of the components are available), as well as the deterministic mean of the coarsely resolved data. Examples based on measured data from several sources and examples of the applicability of this mean preserving smooth interpolator to other averaged data, such as weather data, are presented.
Interpolating binned data such that bin average is preserved
Here is a paper that describes an iterative method that does what you're asking: Mean preserving algorithm for smoothly interpolating averaged data M.D. Rymes, D.R. Myers, Mean preserving algorithm f
Interpolating binned data such that bin average is preserved Here is a paper that describes an iterative method that does what you're asking: Mean preserving algorithm for smoothly interpolating averaged data M.D. Rymes, D.R. Myers, Mean preserving algorithm for smoothly interpolating averaged data, Solar Energy, Volume 71, Issue 4, 2001, Pages 225-231, ISSN 0038-092X, https://doi.org/10.1016/S0038-092X(01)00052-4. (http://www.sciencedirect.com/science/article/pii/S0038092X01000524) Abstract: Hourly mean or monthly mean values of measured solar radiation are typical vehicles for summarized solar radiation and meteorological data. Often, solar-based renewable energy system designers, researchers, and engineers prefer to work with more highly time resolved data, such as detailed diurnal profiles, or mean daily values. The object of this paper is to present a simple method for smoothly interpolating averaged (coarsely resolved) data into data with a finer resolution, while preserving the deterministic mean of the data. The technique preserves the proper component relationship between direct, diffuse, and global solar radiation (when values for at least two of the components are available), as well as the deterministic mean of the coarsely resolved data. Examples based on measured data from several sources and examples of the applicability of this mean preserving smooth interpolator to other averaged data, such as weather data, are presented.
Interpolating binned data such that bin average is preserved Here is a paper that describes an iterative method that does what you're asking: Mean preserving algorithm for smoothly interpolating averaged data M.D. Rymes, D.R. Myers, Mean preserving algorithm f
24,574
Interpolating binned data such that bin average is preserved
Mean preserving or average preserving splines can be generated from "normal" interpolating splines. Your requirements:

$\frac{1}{x_{i+1}-x_i} \int_{x_i}^{x_{i+1}} f(x) \text{d}x = \text{avg}_i$
$f\in\text{C}^1$, or at least $f\in\text{C}^0$
$f(x)\geq 0$

can be written equivalently by defining the integral $F(x) = \int_{x_0}^x f(t) \text{d}t$:

$F(x_{i+1}) = F(x_i) + \text{avg}_i \, (x_{i+1}-x_i)$
$F\in\text{C}^2$, or at least $F\in\text{C}^1$
$F(x)$ is monotonic

This is now a standard spline interpolation for $F$. In R you could do something like:

avg = c(2.2, 3.5, 5.5, 4.5, 2.2, 0.2, 4.5)
X = 0:length(avg)
Y = vector(length=length(X))
Y[1] = 0
for(i in 2:length(Y)) Y[i] = Y[i-1] + avg[i-1]*(X[i]-X[i-1])
#s = splinefun(X, Y, method="natural")
#s = splinefun(X, Y, method="monoH.FC")
s = splinefun(X, Y, method="hyman")
Xplot = seq(X[1], tail(X, n=1), by=0.02)
Yplot = s(Xplot, deriv=1)
barplot(avg, space=0, ylim=c(-0.5,6))
lines(Xplot, Yplot)

result for s=splinefun(X,Y,method="natural") (not guaranteed positive)
result for s=splinefun(X,Y,method="monoH.FC")
result for s=splinefun(X,Y,method="hyman")
Interpolating binned data such that bin average is preserved
Mean preserving or average preserving splines can be generated from "normal" interpolating splines. Your requirements: $\frac{1}{x_{i+1}-x_i} \int_{x_i}^{x_{i+1}} f(x) \text{d}x = \text{avg}_i$ $f\in
Interpolating binned data such that bin average is preserved Mean preserving or average preserving splines can be generated from "normal" interpolating splines. Your requirements: $\frac{1}{x_{i+1}-x_i} \int_{x_i}^{x_{i+1}} f(x) \text{d}x = \text{avg}_i$ $f\in\text{C}^1$, or at least $f\in\text{C}^0$ $f(x)\geq 0$ can be written equivalently by defining the integral $F(x) = \int_{x_0}^x f(t) \text{d}t$: $F(x_{i+1}) = F(x_i) + \text{avg}_i \, (x_{i+1}-x_i)$ $F\in\text{C}^2$, or at least $F\in\text{C}^1$ $F(x)$ is monotonic This is now a standard spline interpolation for $F$. In R you could do something like: avg = c(2.2, 3.5, 5.5, 4.5, 2.2, 0.2, 4.5) X=0:length(avg) Y=vector(length=length(X)) Y[1]=0 for(i in 2:length(Y)) Y[i]=Y[i-1]+avg[i-1]*(X[i]-X[i-1]) #s=splinefun(X,Y,method="natural") #s=splinefun(X,Y,method="monoH.FC") s=splinefun(X,Y,method="hyman") Xplot=seq(X[1],tail(X,n=1),by=0.02) Yplot=s(Xplot,deriv=1) barplot(avg, space=0,ylim=c(-0.5,6)) lines(Xplot,Yplot) result for s=splinefun(X,Y,method="natural") (not guaranteed positive) result for s=splinefun(X,Y,method="monoH.FC") result for s=splinefun(X,Y,method="hyman")
Interpolating binned data such that bin average is preserved Mean preserving or average preserving splines can be generated from "normal" interpolating splines. Your requirements: $\frac{1}{x_{i+1}-x_i} \int_{x_i}^{x_{i+1}} f(x) \text{d}x = \text{avg}_i$ $f\in
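The same cumulative construction works without R: build $F$ at the bin edges, interpolate it with a monotone cubic Hermite scheme (Fritsch-Butland harmonic-mean tangents here, a member of the same family as R's `monoH.FC`), and take the derivative as $f$. The `avg` values are reused from the R example above; everything else is my own sketch.

```python
def mean_preserving_interp(avg):
    # Cumulative F at the bin edges for unit-width bins: F(i) = sum(avg[:i]).
    m = len(avg)
    F = [0.0] * (m + 1)
    for i in range(m):
        F[i + 1] = F[i] + avg[i]
    d = avg[:]                      # secant slopes of F = the bin averages
    # Fritsch-Butland tangents: harmonic mean of neighbouring secants.
    # These satisfy 0 <= t <= 2*min(secants), which keeps the cubic
    # Hermite interpolant of F monotone, hence f = F' >= 0.
    t = [0.0] * (m + 1)
    t[0], t[m] = d[0], d[m - 1]
    for i in range(1, m):
        t[i] = 0.0 if d[i - 1] * d[i] <= 0 else 2 * d[i - 1] * d[i] / (d[i - 1] + d[i])

    def f(x):
        # f = F' from the Hermite basis derivatives (piecewise quadratic).
        i = min(int(x), m - 1)
        u = x - i
        return ((6 * u * u - 6 * u) * (F[i] - F[i + 1])
                + (3 * u * u - 4 * u + 1) * t[i]
                + (3 * u * u - 2 * u) * t[i + 1])
    return f

avg = [2.2, 3.5, 5.5, 4.5, 2.2, 0.2, 4.5]
f = mean_preserving_interp(avg)
```

Because the interpolant of $F$ passes through the cumulative sums exactly, each bin integral of $f$ equals its target average by construction, and monotonicity of $F$ gives $f \ge 0$.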
24,575
Interpolating binned data such that bin average is preserved
The best solution I've got so far is to do a linear interpolation between points at bin centers as shown in the graph in the question, after having done a numerical optimisation of all the $y_i$, iterating until condition #1 is met (and with a harsh penalty for violating #3). Unfortunately, numerical optimisation is a bit of a heavier process than I had hoped for. Instead of doing numerical optimisation, I tried just setting up and solving a set of linear equations. That is really straightforward and quick, but it is not robust against requirement #3: some of the $y_i$ can end up negative, which is nonsensical. Unfortunately, #3 is a non-linear thing and can't be incorporated in the set of linear equations, as far as I can tell.
Interpolating binned data such that bin average is preserved
The best solution I've got so far is to do a linear interpolation between points at bin centers as shown in the graph in the question, after having done a numerical optimisation of all the $y_i$, iter
Interpolating binned data such that bin average is preserved The best solution I've got so far is to do a linear interpolation between points at bin centers as shown in the graph in the question, after having done a numerical optimisation of all the $y_i$, iterating until condition #1 is met (and with a harsh penalty for violating #3). Unfortunately, numerical optimisation is a bit of a heavier process than I had hoped for. Instead of doing numerical optimisation, I tried just setting up and solving a set of linear equations. That is really straightforward and quick, but it is not robust against requirement #3: some of the $y_i$ can end up negative, which is nonsensical. Unfortunately, #3 is a non-linear thing and can't be incorporated in the set of linear equations, as far as I can tell.
Interpolating binned data such that bin average is preserved The best solution I've got so far is to do a linear interpolation between points at bin centers as shown in the graph in the question, after having done a numerical optimisation of all the $y_i$, iter
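For unit-width bins with nodes at the bin centers, the set of linear equations mentioned in the answer above can be written down explicitly: the average of the piecewise-linear interpolant over an interior bin is $(y_{i-1}+6y_i+y_{i+1})/8$, and $(7y_1+y_2)/8$ at the ends if the curve is extended flat beyond the outer centers (the flat-end treatment is my assumption). That is a tridiagonal system, solvable directly with the Thomas algorithm, and it also reproduces the answer's observation that some $y_i$ can come out negative.

```python
def solve_tridiagonal(sub, diag, sup, rhs):
    # Thomas algorithm; sub[0] and sup[-1] are unused.
    n = len(diag)
    cp = [0.0] * n
    dp = [0.0] * n
    cp[0] = sup[0] / diag[0]
    dp[0] = rhs[0] / diag[0]
    for i in range(1, n):
        denom = diag[i] - sub[i] * cp[i - 1]
        cp[i] = (sup[i] / denom) if i < n - 1 else 0.0
        dp[i] = (rhs[i] - sub[i] * dp[i - 1]) / denom
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def node_values(avg):
    # Solve (y[i-1] + 6*y[i] + y[i+1]) / 8 = avg[i] for interior bins,
    # (7*y[0] + y[1]) / 8 = avg[0] etc. at the flat-extended ends.
    n = len(avg)
    sub = [0.0] + [1.0 / 8] * (n - 1)
    sup = [1.0 / 8] * (n - 1) + [0.0]
    diag = [7.0 / 8] + [6.0 / 8] * (n - 2) + [7.0 / 8]
    return solve_tridiagonal(sub, diag, sup, list(avg))

def bin_average(y, i):
    # Average of the piecewise-linear curve over bin i (flat ends).
    n = len(y)
    left = y[i] if i == 0 else (y[i - 1] + 3 * y[i]) / 4
    right = y[i] if i == n - 1 else (3 * y[i] + y[i + 1]) / 4
    return (left + right) / 2

avg = [2.2, 3.5, 5.5, 4.5, 2.2, 0.2, 4.5]
y = node_values(avg)
```

With this particular `avg` (the 0.2 bin squeezed between much larger neighbours) the solved node there goes negative, which is exactly the requirement-#3 failure described above; that is the part that forces a constrained or iterative approach.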
24,576
Interpolating binned data such that bin average is preserved
Binning is highly discouraged because of inefficiency, discontinuity, and arbitrariness. But you have made the implicit assumption that the bins should be non-overlapping. Making the bins overlap and having many more of them will alleviate some of the problems although regression splines are better. Don't use bin centers to represent the distribution of $x$ within the bin. Use the mean $x$ within each bin.
24,577
Partial correlation and multiple regression controlling for categorical variables
It seems to me that the only unanswered part of your question is the part cited below:

Also, is there any robust version of partial correlation (like Kendall's $\tau$/Spearman's rank correlation to Pearson's correlation)?

The same way you can have a partial Pearson correlation coefficient, you can have a partial Spearman correlation coefficient and also a partial Kendall. See some R code below with the package ppcor that helps you with partial correlation.

library(ppcor)
set.seed(2021)
N <- 1000
X <- rnorm(N)
Y <- rnorm(N)
Z <- rnorm(N)
pcor.test(X, Y, Z, method='pearson')

You will be given an estimate of $-0.01175714$. If you rank the variables, that is equivalent to the Spearman correlation.

pcor.test(rank(X), rank(Y), rank(Z), method='pearson')

And this way you get a partial Spearman correlation of $0.008965395$. But you don't have to do this; you can just change the method parameter of the function to spearman.

pcor.test(X, Y, Z, method='spearman')

And here we go, $0.008965395$ again. If you want the partial Kendall correlation, just change the method parameter again.

pcor.test(X, Y, Z, method='kendall')

This time, we got a partial Kendall correlation of $0.006344739$. If by robust you mean not depending on the distribution of the random variables, among other things, and most importantly, a measure of independence, I recommend you read about Mutual Information.
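The same first-order partial correlation can also be computed by hand from the three pairwise correlations via the recursion formula $r_{xy.z} = (r_{xy} - r_{xz} r_{yz})/\sqrt{(1 - r_{xz}^2)(1 - r_{yz}^2)}$, and the partial Spearman version is the same computation on ranks. A minimal Python sketch (the simulated data here are made up, so the numbers differ from the R output above):

```python
import random
from math import sqrt

def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = sqrt(sum((a - mu) ** 2 for a in u))
    sv = sqrt(sum((b - mv) ** 2 for b in v))
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (su * sv)

def partial_corr(x, y, z):
    """First-order partial correlation r_xy.z via the recursion formula."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

def ranks(v):
    order = sorted(range(len(v)), key=lambda i: v[i])
    r = [0] * len(v)
    for rk, i in enumerate(order):
        r[i] = rk + 1
    return r

random.seed(2021)
x = [random.gauss(0, 1) for _ in range(1000)]
z = [random.gauss(0, 1) for _ in range(1000)]
y = [zi + random.gauss(0, 1) for zi in z]   # y depends on z but not on x

r_pearson = partial_corr(x, y, z)                      # partial Pearson
r_spearman = partial_corr(ranks(x), ranks(y), ranks(z))  # partial Spearman
```

Since $x$ is independent of $y$ given $z$ by construction, both partial correlations should be close to zero.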
24,578
Model selection in offline vs. online learning
Obviously, in a streaming context you cannot split your data into train and test sets to perform cross-validation. Using only the metrics calculated on the initial train set sounds even worse, as you assume that your data changes and your model will adapt to the changes--that is why you are using the online learning mode in the first place. What you could do is use the kind of cross-validation that is used for time series (see Hyndman and Athanasopoulos, 2018). To assess the accuracy of time-series models, you can use a sequential method, where the model is trained on the first $k$ observations to predict the $(k+1)$-th "future" timepoint. This can be applied one point at a time, or in batches, and the procedure is repeated until you have traversed all your data (see the figure below, taken from Hyndman and Athanasopoulos, 2018). At the end, you somehow average (usually the arithmetic mean, but you could use something like exponential smoothing as well) the error metrics to obtain the overall accuracy estimate. In an online scenario this would mean that you start by training on timepoint 1 and testing on timepoint 2, next re-train on timepoints 1-2 to test on timepoint 3, etc. Notice that such a cross-validation methodology lets you account for the changing nature of your model's performance. Obviously, as your model adapts to the data and the data may change, you would need to monitor the error metrics regularly: otherwise it wouldn't differ much from using fixed-size train and test sets.
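A bare-bones sketch of that rolling-origin scheme, one point at a time. The data stream and the forecaster (a naive "repeat the last value" model) are invented just to keep the sketch self-contained:

```python
# Rolling-origin evaluation: train on points 1..k, test on point k+1,
# then roll forward and repeat until the data are exhausted.
series = [10, 12, 11, 13, 14, 13, 15, 16]   # made-up stream

abs_errors = []
for k in range(1, len(series)):
    train = series[:k]        # everything observed so far
    forecast = train[-1]      # naive one-step-ahead prediction
    actual = series[k]
    abs_errors.append(abs(actual - forecast))

# average the per-step errors for the overall accuracy estimate
mae = sum(abs_errors) / len(abs_errors)
```

In a real online setting you would replace the naive forecaster with your adaptive model and keep a running (or exponentially smoothed) average of the errors instead of a single summary at the end.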
24,579
Evaluating a regression model's performance using training and test sets?
As said, typically the Mean Squared Error is used. You calculate your regression model based on your training set, and evaluate its performance using a separate test set (a set of inputs x with known outputs y) by calculating the MSE between the outputs of the test set (y) and the outputs given by the model (f(x)) for the same inputs (x). Alternatively you can use the following metrics: Root Mean Squared Error, Relative Squared Error, Mean Absolute Error, Relative Absolute Error... (ask Google for definitions)
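These metrics only take a few lines to compute. A generic sketch, using the common convention that the "relative" variants normalize by the error of simply predicting the mean of y:

```python
from math import sqrt

def regression_metrics(y_true, y_pred):
    """MSE, RMSE, MAE, plus relative variants (vs. predicting the mean)."""
    n = len(y_true)
    mean_y = sum(y_true) / n
    se = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    ae = [abs(t - p) for t, p in zip(y_true, y_pred)]
    mse = sum(se) / n
    return {
        "mse": mse,
        "rmse": sqrt(mse),
        "mae": sum(ae) / n,
        "rse": sum(se) / sum((t - mean_y) ** 2 for t in y_true),
        "rae": sum(ae) / sum(abs(t - mean_y) for t in y_true),
    }

# hypothetical test-set outputs y and model predictions f(x)
m = regression_metrics([3.0, 5.0, 7.0], [2.5, 5.0, 8.0])
```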
24,580
Trouble finding good model fit for count data with mixed effects - ZINB or something else?
This post is four years old, but I wanted to follow up on what fsociety said in a comment. Diagnosis of residuals in GLMMs is not straightforward, since standard residual plots can show non-normality, heteroscedasticity, etc., even if the model is correctly specified. There is an R package, DHARMa, specifically suited to diagnosing these types of models. The package is based on a simulation approach to generate scaled residuals from fitted generalized linear mixed models and produces different, easily interpretable diagnostic plots. Here is a small example with the data from the original post and the first fitted model (m1):

library(DHARMa)
sim_residuals <- simulateResiduals(m1, 1000)
plotSimulatedResiduals(sim_residuals)

The plot on the left shows a QQ plot of the scaled residuals to detect deviations from the expected distribution, and the plot on the right represents residuals vs predicted values while performing quantile regression to detect deviations from uniformity (red lines should be horizontal and at 0.25, 0.50 and 0.75). Additionally, the package also has specific functions for testing for over/under-dispersion and zero inflation, among others:

testOverdispersionParametric(m1)

Chisq test for overdispersion in GLMMs
data: poisson
dispersion = 0.18926, pearSS = 11.35600, rdf = 60.00000, p-value = 1
alternative hypothesis: true dispersion greater 1

testZeroInflation(sim_residuals)

DHARMa zero-inflation test via comparison to expected zeros with simulation under H0 = fitted model
data: sim_residuals
ratioObsExp = 0.98894, p-value = 0.502
alternative hypothesis: more
24,581
Comparing regression coefficients of same model across different data sets
From the ideal gas law here, $PV=nRT$, suggesting a proportional model. Make sure your units are in absolute temperature. Asking for a proportional result would imply a proportional error model. Consider, perhaps, $Y=a D^b S^c$; then for multiple linear regression one can use $\ln (Y)=\ln (a)+b \ln (D)+c \ln (S)$ by taking the logarithms of the Y, D, and S values, so that this then looks like $Y_l=a_l+b D_l+c S_l$, where the $l$ subscripts mean "logarithm of." Now, this may work better than the linear model you are using, and the answers are then of relative-error type. To verify what type of model to use, try one and check whether the residuals are homoscedastic. If they are not, the model is biased; then try something else, like modelling the logarithms as above, one or more reciprocals of the x or y data, square roots, squaring, exponentiation and so forth, until the residuals are homoscedastic. If the model cannot yield homoscedastic residuals, then use multiple linear Theil regression, with censoring if needed. Normality of the data on the y-axis is not required, but outliers can and often do distort the regression parameter results markedly. If homoscedasticity cannot be achieved, then ordinary least squares should not be used and some other type of regression needs to be performed, e.g. weighted regression, Theil regression, least squares in x, Deming regression and so forth. The meaning of the output $z = (a_{1} - b_{1}) / \sqrt{SE_{a_{1}}^2 + SE_{b_{1}}^2}$ may or may not be relevant. This assumes that the total variance is the sum of two independent variances. To put this another way, independence is orthogonality (perpendicularity) on an $x,y$ plot. That is, the total variability (variance) then follows the Pythagorean theorem, $H=+\sqrt{A^2+O^2}$, which may or may not be the case for your data. Also, the errors should not be serially correlated.
If that is the case, then the $z$-statistic is a relative distance, i.e., a difference of means (a distance) divided by the Pythagorean (i.e., vector) addition of standard errors (SE), which are standard deviations (SDs) divided by $\sqrt{N}$, where the SEs are themselves distances. Dividing one distance by the other then normalizes them, i.e., the difference in means divided by the total (standard) error, which is then in a form to which one can apply $N(0,1)$ to find a probability. Now, what happens if the measures are not independent, and how can one test for it? You may remember from geometry that triangles that are not right-angled add their sides as $C^2=A^2+B^2-2 A B \cos (\theta ),\ \theta =\angle(A,B)$; if not, refresh your memory here. That is, when there is something other than a 90-degree angle between the axes, we have to include that angle in the calculation of total distance. First recall what correlation is: standardized covariance. For total distance $\sigma _T$ and correlation $\rho_{A,B}$ this becomes $\sigma _T^2=\sigma _A^2+\sigma _B^2-2 \sigma _A \sigma _B \rho_{A,B}$. In other words, if your standard deviations are correlated (e.g., pairwise), they are not independent.
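A numerical sketch of the two variance formulas above. The slopes, standard errors, and correlation here are invented purely for illustration:

```python
from math import sqrt

# Hypothetical slopes and standard errors from two data sets
a1, b1 = 1.8, 1.2
se_a1, se_b1 = 0.20, 0.25

# Independent case: variances add (Pythagorean form)
z_indep = (a1 - b1) / sqrt(se_a1 ** 2 + se_b1 ** 2)

# Correlated case: law-of-cosines form with correlation rho
rho = 0.5
z_corr = (a1 - b1) / sqrt(se_a1 ** 2 + se_b1 ** 2
                          - 2 * se_a1 * se_b1 * rho)
```

With positive correlation the denominator shrinks, so the same coefficient difference yields a larger $|z|$; ignoring the correlation would make the test conservative here.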
24,582
Explaining generalized method of moments to a non-statistician
In the classical method of moments you specify one moment condition for each parameter you need to estimate. The resulting set of equations is then "just-identified". GMM aims to find a solution even when the system is over-identified, i.e., there are more moment conditions than parameters. The idea is to find a minimum-distance solution by finding parameter estimates that bring the moment conditions as close to zero as possible.
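A toy numerical illustration of that idea (the data and setup are invented): for an exponential model with rate $\lambda$, both $E[X]=1/\lambda$ and $E[X^2]=2/\lambda^2$ give a moment condition, so with one parameter and two conditions the system is over-identified. GMM with an identity weighting matrix picks the $\lambda$ that brings both conditions as close to zero as possible, here via a crude grid search:

```python
import random

random.seed(1)
data = [random.expovariate(2.0) for _ in range(5000)]   # true lambda = 2

m1 = sum(data) / len(data)                  # sample E[X]
m2 = sum(x * x for x in data) / len(data)   # sample E[X^2]

def gmm_objective(lam):
    # Two moment conditions, one parameter: over-identified system.
    g1 = m1 - 1.0 / lam
    g2 = m2 - 2.0 / lam ** 2
    return g1 * g1 + g2 * g2                # identity weighting matrix

# Crude grid search for the minimum-distance estimate on [0.5, 5.5)
grid = [0.5 + 0.001 * i for i in range(5000)]
lam_hat = min(grid, key=gmm_objective)
```

A proper GMM implementation would use a numerical optimizer and an efficient weighting matrix rather than the identity, but the minimum-distance idea is the same.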
24,583
Explaining generalized method of moments to a non-statistician
There are several methods to estimate the parameters of a model. This is a core part of statistics/econometrics. GMM (Generalized Method of Moments) is one such method, and it is more robust (statistically and literally, for a non-statistics audience) than several others. It should be intuitive that the process of estimation involves how well your model fits the data. GMM uses more conditions than the ordinary methods while doing this. (You have mentioned average and variance; I am assuming that is a familiar idea.) Average and variance are some basic metrics of the data. A person models the data to understand its nature. A perfect (hypothetical) model would explain the data through and through. Let us take the example of modeling the heights of all the people in a building. There are two metrics, average and variance. Average is the first-level metric, variance is the second-level metric. The average is obtained by adding all the heights and dividing by the number of people. It tells you something like: 11 feet is ridiculous, 5 feet is sensible. Now consider the variance; it gives an additional layer of information: 6 feet is not ridiculous (based on the average), but how likely is it for a person's height to be 6 feet? If the building is a middle school building, it is less likely, right? If it is an office building, more likely. These are examples of something technically called moments of the data (after explaining average and variance, this should be comfortable). One's model should do well if it caters to these conditions of average and variance observed. Beyond average and variance, there are several other metrics. GMM fits the model for these higher metrics (moments). Simpler methods cater to fewer metrics. The name, as it suggests, is a generalized method: it tries to be as general as possible.
24,584
Population stability index - division by zero
I guess you could consider the empty bins as filled with a very small number. This retains the information and avoids division by zero. And, of course, this way you keep the original bins, which is a good thing.
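A small sketch of that fix: floor the empty fractions at a tiny $\epsilon$ before taking the log ratio, so the PSI stays finite while the bins stay unchanged. The counts and the choice of $\epsilon$ here are invented:

```python
from math import log

def psi(expected_counts, actual_counts, eps=1e-4):
    """Population stability index with empty bins floored at eps."""
    e_tot, a_tot = sum(expected_counts), sum(actual_counts)
    total = 0.0
    for e, a in zip(expected_counts, actual_counts):
        pe = max(e / e_tot, eps)   # avoid zero in either fraction,
        pa = max(a / a_tot, eps)   # which would break the log ratio
        total += (pa - pe) * log(pa / pe)
    return total

# one empty bin in the new population; no division by zero occurs
value = psi([50, 30, 20], [55, 45, 0])
```

Note that the result is sensitive to the choice of $\epsilon$ when a bin is truly empty, so it is worth reporting which floor was used.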
24,585
Population stability index - division by zero
One way is to assign one count to such bins and then calculate the fractions considering these new points:

counts[counts==0] = 1
fract = counts/sum(counts)

If your sample is large enough that a few counts are negligible, then I think it is a good approximation. The information that you have after adding these "phantom" counts is almost the same (always considering the validity of new points in your sample): very, very few points from your sample fall in these bins.
24,586
Population stability index - division by zero
Could you skip it? That is, could you understand it as zero? The division by zero is uniquely and reasonably determined as 1/0=0/0=z/0=0 in the natural extensions of fractions. We have to change our basic ideas for our space and world. Division by Zero z/0 = 0 in Euclidean Spaces, Hiroshi Michiwaki, Hiroshi Okumura and Saburou Saitoh, International Journal of Mathematics and Computation, Vol. 28 (2017), Issue 1, 1-16. http://www.scirp.org/journal/alamt http://dx.doi.org/10.4236/alamt.2016.62007 http://www.ijapm.org/show-63-504-1.html http://www.diogenes.bg/ijam/contents/2014-27-2/9/9.pdf http://okmr.yamatoblog.net/division%20by%20zero/announcement%20326-%20the%20divi http://okmr.yamatoblog.net/ Relations of 0 and infinity, Hiroshi Okumura, Saburou Saitoh and Tsutomu Matsuura: http://www.e-jikei.org/…/Camera%20ready%20manuscript_JTSS_A… https://sites.google.com/site/sandrapinelas/icddea-2017
24,587
Can we make the Irwin-Hall distribution more general?
Well, this isn't really a full answer; I will come back later to complete it. Brian Ripley's book Stochastic Simulation has the closed-form pdf as exercise 3.1, page 92, given below:
$$ f(x) = \sum_{r=0}^{\lfloor x \rfloor} (-1)^r \binom{n}{r} (x-r)^{n-1}/ (n-1)! $$
An R implementation of this is below:

makeIH <- function(n) Vectorize( function(x) {
    if (x < 0) return(0.0)
    if (x > n) return(0.0)
    X <- floor(x)
    r <- seq(from=0, to=X)
    s <- (-1)^r * choose(n, r) * (x-r)^(n-1) / factorial(n-1)
    sum(s)
} )

which is used this way:

fun3 <- makeIH(3)
plot(fun3, from=0, to=3, n=1001)
abline(v=1, col="red")
abline(v=2, col="red")

and gives this plot: The unsmoothness at the integer values $x=1, x=2$ can be seen, at least with good eyesight. (I will come back to complete this later.)
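The formula is easy to sanity-check in another language (this check is my addition, not part of the original answer): for $n=2$ the density is the triangular one with peak $f(1)=1$, and for $n=3$ the peak is $f(1.5)=3/4$.

```python
from math import comb, factorial, floor

def irwin_hall_pdf(x, n):
    """Density of the sum of n independent U(0,1) variables."""
    if x < 0 or x > n:
        return 0.0
    return sum((-1) ** r * comb(n, r) * (x - r) ** (n - 1)
               for r in range(floor(x) + 1)) / factorial(n - 1)

f2 = irwin_hall_pdf(1.0, 2)   # peak of the triangular density, exactly 1
```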
Can we make the Irwin-Hall distribution more general?
Well, this isn't really a full answer, will come back later to complete ... Brian Ripley's book Stochastic simulation has the closed pdf formula as exercise 3.1 page 92 and is given below: $$ f(x) =
Can we make the Irwin-Hall distribution more general? Well, this isn't really a full answer, will come back later to complete ... Brian Ripley's book Stochastic simulation has the closed pdf formula as exercise 3.1, page 92, and it is given below: $$ f(x) = \sum_{r=0}^{\lfloor x \rfloor} (-1)^r \binom{n}{r} (x-r)^{n-1}/ (n-1)! $$ An R implementation of this is below: makeIH <- function(n) Vectorize( function(x) { if (x < 0) return(0.0) if (x > n) return(0.0) X <- floor(x) r <- seq(from=0, to=X) s <- (-1)^r * choose(n, r)*(x-r)^(n-1)/factorial(n-1) sum(s) } ) which is used this way: fun3 <- makeIH(3) plot(fun3,from=0,to=3,n=1001) abline(v=1, col="red") abline(v=2, col="red") and gives this plot: The unsmoothness at the integer values $x=1, x=2$ can be seen, at least with good eyesight .... (I will come back to complete this later)
Can we make the Irwin-Hall distribution more general? Well, this isn't really a full answer, will come back later to complete ... Brian Ripley's book Stochastic simulation has the closed pdf formula as exercise 3.1 page 92 and is given below: $$ f(x) =
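The closed-form density above is easy to sanity-check numerically. A Python sketch mirroring the R code (the function name is my own): the $n=3$ density should integrate to 1 and be symmetric about $n/2 = 1.5$.

```python
from math import comb, factorial, floor

def irwin_hall_pdf(x, n):
    """Closed-form density of the sum of n iid Uniform(0,1) variables."""
    if x < 0 or x > n:
        return 0.0
    return sum((-1) ** r * comb(n, r) * (x - r) ** (n - 1)
               for r in range(floor(x) + 1)) / factorial(n - 1)

# Midpoint-rule check that the n = 3 density integrates to 1,
# plus a spot check of symmetry about n/2 = 1.5.
n, steps = 3, 20000
h = n / steps
total = sum(irwin_hall_pdf((i + 0.5) * h, n) * h for i in range(steps))
left, right = irwin_hall_pdf(1.2, n), irwin_hall_pdf(1.8, n)
```

The same scheme works for any $n$; only the summation range grows.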
24,588
Ratio of sum of Normal to sum of cubes of Normal
If the formulation was $$ U_n = \frac{X_1 + X_2 + \ldots + X_n}{Y_1^3 + Y_2^3 + \ldots + Y_n^3}$$ where $X_i\sim N(0,1)$ and $Y_i \sim N(0,1)$ are independent, it would be just a classic textbook exercise. You use the fact that $$F_n \stackrel{d}{\to} F,\quad G_n \stackrel{d}{\to}G \Rightarrow \frac{F_n}{G_n} \stackrel{d}{\to} \frac{F}{G}$$ and we can conclude that $U$ converges to a scaled Cauchy distribution. But in your formulation, we can't apply the theorem due to dependence. My Monte-Carlo suggests that the limit distribution of $U_n$ is non-degenerate, has no first moment, and is not symmetric. I'd be interested in whether there is an explicit solution to this problem. I feel like the solution can only be written in terms of the Wiener process. [EDIT] Following whuber's hint, note that $$(\frac{1}{\sqrt{n}}\sum{X_i},\frac{1}{\sqrt{n}}\sum{X_i^3})\stackrel{d}{\to}(Z_1,Z_2)$$ where $$(Z_1,Z_2) \sim N(0,\pmatrix{1 &3 \\3 &15})$$ by noting that $E[X_1^4]=3$ and $E[X_1^6]=15$ (moments of the standard normal: $(n-1)!!$ for even $n$). Then by the continuous mapping theorem, we have $$U_n \stackrel{d}{\to} \frac{Z_1}{Z_2}$$ Noting that we can write $Z_1 = \frac{1}{5}Z_2+\sqrt{\frac{2}{5}}Z_3$ where $Z_3\sim N(0,1)$ and independent of $Z_2$, we conclude that $$U_n \stackrel{d}{\to} \frac{1}{5}+\sqrt{\frac{2}{5}}\frac{Z_3}{Z_2} \equiv \frac{1}{5}+\sqrt{\frac{2}{75}}\Gamma$$ where $\Gamma \sim Cauchy$
Ratio of sum of Normal to sum of cubes of Normal
If the formulation was $$ U_n = \frac{X_1 + X_2 + \ldots + X_n}{Y_1^3 + Y_2^3 + \ldots + Y_n^3}$$ where $X_i\sim N(0,1)$ and $Y_i \sim N(0,1)$ are independent, it would be just a classic textbook exerci
Ratio of sum of Normal to sum of cubes of Normal If the formulation was $$ U_n = \frac{X_1 + X_2 + \ldots + X_n}{Y_1^3 + Y_2^3 + \ldots + Y_n^3}$$ where $X_i\sim N(0,1)$ and $Y_i \sim N(0,1)$ are independent, it would be just a classic textbook exercise. You use the fact that $$F_n \stackrel{d}{\to} F,\quad G_n \stackrel{d}{\to}G \Rightarrow \frac{F_n}{G_n} \stackrel{d}{\to} \frac{F}{G}$$ and we can conclude that $U$ converges to a scaled Cauchy distribution. But in your formulation, we can't apply the theorem due to dependence. My Monte-Carlo suggests that the limit distribution of $U_n$ is non-degenerate, has no first moment, and is not symmetric. I'd be interested in whether there is an explicit solution to this problem. I feel like the solution can only be written in terms of the Wiener process. [EDIT] Following whuber's hint, note that $$(\frac{1}{\sqrt{n}}\sum{X_i},\frac{1}{\sqrt{n}}\sum{X_i^3})\stackrel{d}{\to}(Z_1,Z_2)$$ where $$(Z_1,Z_2) \sim N(0,\pmatrix{1 &3 \\3 &15})$$ by noting that $E[X_1^4]=3$ and $E[X_1^6]=15$ (moments of the standard normal: $(n-1)!!$ for even $n$). Then by the continuous mapping theorem, we have $$U_n \stackrel{d}{\to} \frac{Z_1}{Z_2}$$ Noting that we can write $Z_1 = \frac{1}{5}Z_2+\sqrt{\frac{2}{5}}Z_3$ where $Z_3\sim N(0,1)$ and independent of $Z_2$, we conclude that $$U_n \stackrel{d}{\to} \frac{1}{5}+\sqrt{\frac{2}{5}}\frac{Z_3}{Z_2} \equiv \frac{1}{5}+\sqrt{\frac{2}{75}}\Gamma$$ where $\Gamma \sim Cauchy$
Ratio of sum of Normal to sum of cubes of Normal If the formulation was $$ U_n = \frac{X_1 + X_2 + \ldots + X_n}{Y_1^3 + Y_2^3 + \ldots + Y_n^3}$$ where $X_i\sim N(0,1)$ and $Y_i \sim N(0,1)$ are independent, it would be just a classic textbook exerci
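The limit can be checked by simulation; here is a Python sketch (the constants come from the derivation above). Since $\frac{1}{5}+\sqrt{2/75}\,\Gamma$ has median $\frac{1}{5}$, the sample median of many replicates of $U_n$ should settle near $0.2$, while the Cauchy-type tails keep a visible fraction of draws far from it.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 1000, 5000
x = rng.standard_normal((reps, n))
u = x.sum(axis=1) / (x ** 3).sum(axis=1)          # one draw of U_n per row

med = float(np.median(u))                          # limiting median is 1/5
frac_far = float(np.mean(np.abs(u - 0.2) > 1.0))   # heavy tails persist
```

For the limit law, the fraction beyond distance 1 of the median is $1-\frac{2}{\pi}\arctan(1/\sqrt{2/75})\approx 0.10$, which the simulation should roughly reproduce.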
24,589
Ratio of sum of Normal to sum of cubes of Normal
Some comments, not a full solution. This is too long for a comment, but really only a comment. Some properties of the solution. Since the $X_i$ are iid standard normal, which is a symmetric (about zero) distribution, $X_i^3$ will also have symmetric distributions, and sums of (independent) symmetric rv's will be symmetric. So this is a ratio with the numerator and denominator both symmetric, so it will be symmetric. The denominator will have a continuous density which is positive at zero, so we will expect the ratio to lack expectation (it is a general result that if $Z$ is a random variable with continuous density positive at zero, then $1/Z$ will lack expectation; see I've heard that ratios or inverses of random variables often are problematic, in not having expectations. Why is that?). But here, there is dependence between numerator and denominator which complicates the matter ... (Clearly needs more thought here). The interesting paper https://projecteuclid.org/download/pdf_1/euclid.aop/1176991795 shows that $X_i^3$ above, the cube of standard normal variables, has an indeterminate distribution "in the Hamburger sense", that is, it is not determined by its moments! So the comment above about using transforms might indicate a difficult way to proceed!
Ratio of sum of Normal to sum of cubes of Normal
Some comments, not a full solution. This is too long for a comment, but really only a comment. Some properties of the solution. Since the $X_i$ are iid standard normal, which is a symmetric (about ze
Ratio of sum of Normal to sum of cubes of Normal Some comments, not a full solution. This is too long for a comment, but really only a comment. Some properties of the solution. Since the $X_i$ are iid standard normal, which is a symmetric (about zero) distribution, $X_i^3$ will also have symmetric distributions, and sums of (independent) symmetric rv's will be symmetric. So this is a ratio with the numerator and denominator both symmetric, so it will be symmetric. The denominator will have a continuous density which is positive at zero, so we will expect the ratio to lack expectation (it is a general result that if $Z$ is a random variable with continuous density positive at zero, then $1/Z$ will lack expectation; see I've heard that ratios or inverses of random variables often are problematic, in not having expectations. Why is that?). But here, there is dependence between numerator and denominator which complicates the matter ... (Clearly needs more thought here). The interesting paper https://projecteuclid.org/download/pdf_1/euclid.aop/1176991795 shows that $X_i^3$ above, the cube of standard normal variables, has an indeterminate distribution "in the Hamburger sense", that is, it is not determined by its moments! So the comment above about using transforms might indicate a difficult way to proceed!
Ratio of sum of Normal to sum of cubes of Normal Some comments, not a full solution. This is too long for a comment, but really only a comment. Some properties of the solution. Since the $X_i$ are iid standard normal, which is a symmetric (about ze
24,590
Finding a known number of circle centers that maximize the number of points within a fixed distance
This is a variation of the k-means problem. The radius of the centers doesn't matter, as long as they are assumed equal. Links: http://en.wikipedia.org/wiki/K-means_clustering http://stat.ethz.ch/R-manual/R-devel/library/stats/html/kmeans.html It will put the centers of the circles at locations of highest probability of the points. Classic K-means Procedure:
1. set cluster count to 5
2. put each point in a random cluster
3. for each cluster, calculate the mean position
4. for each point, calculate the distance to each new mean position
5. associate membership with the nearest cluster
6. repeat until done (iterations, change in position, or other error metric)
Options: You can use some under-relaxation after 3, where you translate the mean position slowly toward the new position. This is a discrete system, so it doesn't converge perfectly. Sometimes it does, and you can end when points stop changing membership, but sometimes they just wiggle a bit. If you are making your own code (as most folks should) then you can use the POR k-means above as a starting point, and do some variation on EM informed by the percent of points exclusively and completely encompassed by the circles. Why K-means attacks the problem: It is the equivalent of fitting a Gaussian Mixture Model where the covariances of the components are equal. The centers of the mixture components are going to be located at the positions of highest expectation of points. The curves of constant probability are going to be circles. This is an EM algorithm, so it has asymptotic convergence. The memberships are hard, not soft. I think that if the fundamental assumption of the equal-variance-components mixture model is reasonably "close", whatever that means, then this method is going to fit. If you just randomly distribute points, it is less likely to fit well. There should be some analog of a "Zero Inflated Poisson" where there is a component that is non-Gaussian that picks up the uniform distribution.
If you wanted to "tune" your model and were confident that there were enough sample points, then you could initialize with the k-means, and then make an augmented k-means adjuster that removes points outside of the radii of the circles from competition. It would slightly perturb the circles you have, but it might have slightly improved performance given the data.
Finding a known number of circle centers that maximize the number of points within a fixed distance
This is a variation of the k-means problem. The radius of the centers doesn't matter, as long as they are assumed equal. Links: http://en.wikipedia.org/wiki/K-means_clustering http://stat.ethz.ch/R-manual
Finding a known number of circle centers that maximize the number of points within a fixed distance This is a variation of the k-means problem. The radius of the centers doesn't matter, as long as they are assumed equal. Links: http://en.wikipedia.org/wiki/K-means_clustering http://stat.ethz.ch/R-manual/R-devel/library/stats/html/kmeans.html It will put the centers of the circles at locations of highest probability of the points. Classic K-means Procedure:
1. set cluster count to 5
2. put each point in a random cluster
3. for each cluster, calculate the mean position
4. for each point, calculate the distance to each new mean position
5. associate membership with the nearest cluster
6. repeat until done (iterations, change in position, or other error metric)
Options: You can use some under-relaxation after 3, where you translate the mean position slowly toward the new position. This is a discrete system, so it doesn't converge perfectly. Sometimes it does, and you can end when points stop changing membership, but sometimes they just wiggle a bit. If you are making your own code (as most folks should) then you can use the POR k-means above as a starting point, and do some variation on EM informed by the percent of points exclusively and completely encompassed by the circles. Why K-means attacks the problem: It is the equivalent of fitting a Gaussian Mixture Model where the covariances of the components are equal. The centers of the mixture components are going to be located at the positions of highest expectation of points. The curves of constant probability are going to be circles. This is an EM algorithm, so it has asymptotic convergence. The memberships are hard, not soft. I think that if the fundamental assumption of the equal-variance-components mixture model is reasonably "close", whatever that means, then this method is going to fit. If you just randomly distribute points, it is less likely to fit well.
There should be some analog of a "Zero Inflated Poisson" where there is a component that is non-Gaussian that picks up the uniform distribution. If you wanted to "tune" your model and were confident that there were enough sample points, then you could initialize with the k-means, and then make an augmented k-means adjuster that removes points outside of the radii of the circles from competition. It would slightly perturb the circles you have, but it might have slightly improved performance given the data.
Finding a known number of circle centers that maximize the number of points within a fixed distance This is a variation of the k-means problem. The radius of the centers doesn't matter, as long as they are assumed equal. Links: http://en.wikipedia.org/wiki/K-means_clustering http://stat.ethz.ch/R-manual
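The listed procedure takes only a few lines of code. A minimal Python sketch of it (plain Lloyd-style iteration; the demo data, seeds, and names are my own illustrations, not from the answer):

```python
import numpy as np

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd-style k-means following the steps listed above."""
    rng = np.random.default_rng(seed)
    labels = rng.integers(0, k, size=len(points))   # step 2: random membership
    for _ in range(iters):
        # step 3: mean position per cluster (re-seed a cluster if it empties)
        centers = np.array([points[labels == j].mean(axis=0)
                            if np.any(labels == j)
                            else points[rng.integers(len(points))]
                            for j in range(k)])
        # steps 4-5: distances to each center, nearest-center membership
        d = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        new_labels = d.argmin(axis=1)
        if np.array_equal(new_labels, labels):      # step 6: points stopped moving
            break
        labels = new_labels
    return centers, labels

# demo: two well-separated blobs; the centers should track the blob means
rng = np.random.default_rng(1)
pts = np.vstack([rng.normal(0.0, 1.0, (200, 2)), rng.normal(10.0, 1.0, (200, 2))])
centers, labels = kmeans(pts, k=2)
```

Under-relaxation (the option mentioned after step 3) would replace the center update with a partial step toward the new mean.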
24,591
Finding a known number of circle centers that maximize the number of points within a fixed distance
Someone probably has a better formal algorithm, but here's one brute force approach (a hack?). I'd use one of the hexagonal binning algorithms to compute a 2D histogram. Like hexbin in R. I'd use a hexagon size that'd roughly circumscribe your circle of radius R and then sort on the top N bins. If you got N distinct far away bins, great. Now one way is to move about the circle locally on a 2*R scale (in x and y directions) from the center of the top density hexagons. Computing densities can roughly optimize position locally. This will account for the fact that the hexagons were not a moving window with respect to a fixed origin. If all top bins are close by you'd have to have some smarter way of moving your circles in that vicinity. Note that I can think of several corner cases where such a naive strategy will spectacularly fail. Yet, just a starting point. Meanwhile, I hope someone has a better algorithm.
Finding a known number of circle centers that maximize the number of points within a fixed distance
Someone probably has a better formal algorithm, but here's one brute force approach (a hack?). I'd use one of the hexagonal binning algorithms to compute a 2D histogram. Like hexbin in R. I'd use a
Finding a known number of circle centers that maximize the number of points within a fixed distance Someone probably has a better formal algorithm, but here's one brute force approach (a hack?). I'd use one of the hexagonal binning algorithms to compute a 2D histogram. Like hexbin in R. I'd use a hexagon size that'd roughly circumscribe your circle of radius R and then sort on the top N bins. If you got N distinct far away bins, great. Now one way is to move about the circle locally on a 2*R scale (in x and y directions) from the center of the top density hexagons. Computing densities can roughly optimize position locally. This will account for the fact that the hexagons were not a moving window with respect to a fixed origin. If all top bins are close by you'd have to have some smarter way of moving your circles in that vicinity. Note that I can think of several corner cases where such a naive strategy will spectacularly fail. Yet, just a starting point. Meanwhile, I hope someone has a better algorithm.
Finding a known number of circle centers that maximize the number of points within a fixed distance Someone probably has a better formal algorithm, but here's one brute force approach (a hack?). I'd use one of the hexagonal binning algorithms to compute a 2D histogram. Like hexbin in R. I'd use a
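A toy version of this idea, with square bins standing in for hexagons (a Python sketch; the example data, radius, and grid are my own illustrations): bin the points, take the densest bin's center as a candidate circle center, then score it locally by counting the points the circle captures.

```python
import numpy as np

rng = np.random.default_rng(2)
# a dense cluster at (3, 3) sitting on top of uniform background points
pts = np.vstack([rng.normal(3, 0.3, (300, 2)), rng.uniform(0, 10, (300, 2))])

R = 1.0                                  # circle radius
edges = np.arange(0.0, 10.5, 2 * R)      # bin width roughly circumscribing the circle
H, xe, ye = np.histogram2d(pts[:, 0], pts[:, 1], bins=[edges, edges])
i, j = np.unravel_index(H.argmax(), H.shape)
center = np.array([(xe[i] + xe[i + 1]) / 2, (ye[j] + ye[j + 1]) / 2])

# local score: how many points the circle at the top bin's center captures
covered = int((np.linalg.norm(pts - center, axis=1) <= R).sum())
```

For N circles one would take the top N well-separated bins and then nudge each center locally, as the answer describes.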
24,592
How to model a biased coin with time varying bias?
I doubt you can come up with a model with an analytic solution, but the inference can still be made tractable using the right tools, as the dependency structure of your model is simple. As a machine learning researcher, I would prefer using the following model, as the inference can be made pretty efficient using the technique of Expectation Propagation: Let $X(t)$ be the outcome of the $t$-th trial. Let us define the time-varying parameter $\eta(t+1) \sim \mathcal{N}(\eta(t), \tau^2)$ for $t \geq 0$. To link $\eta(t)$ with $X(t)$, introduce latent variables $Y(t) \sim \mathcal{N}(\eta(t), \beta^2)$, and model $X(t)$ to be $X(t) = 1$ if $Y(t) \geq 0$, and $X(t) = 0$ otherwise. You can actually ignore the $Y(t)$'s and marginalize them out to just say $\mathbb{P}[X(t)=1] = \Phi(\eta(t)/\beta)$ (with $\Phi$ the cdf of the standard normal), but the introduction of latent variables makes inference easy. Also, note that in your original parametrization $\theta(t) = \eta(t)/\beta$. If you are interested in implementing the inference algorithm, take a look at this paper. They use a very similar model so you can easily adapt the algorithm. To understand EP the following page may be found useful. If you are interested in pursuing this approach let me know; I can provide more detailed advice on how to implement the inference algorithm.
How to model a biased coin with time varying bias?
I doubt you can come up with a model with an analytic solution, but the inference can still be made tractable using the right tools, as the dependency structure of your model is simple. As a machine learning
How to model a biased coin with time varying bias? I doubt you can come up with a model with an analytic solution, but the inference can still be made tractable using the right tools, as the dependency structure of your model is simple. As a machine learning researcher, I would prefer using the following model, as the inference can be made pretty efficient using the technique of Expectation Propagation: Let $X(t)$ be the outcome of the $t$-th trial. Let us define the time-varying parameter $\eta(t+1) \sim \mathcal{N}(\eta(t), \tau^2)$ for $t \geq 0$. To link $\eta(t)$ with $X(t)$, introduce latent variables $Y(t) \sim \mathcal{N}(\eta(t), \beta^2)$, and model $X(t)$ to be $X(t) = 1$ if $Y(t) \geq 0$, and $X(t) = 0$ otherwise. You can actually ignore the $Y(t)$'s and marginalize them out to just say $\mathbb{P}[X(t)=1] = \Phi(\eta(t)/\beta)$ (with $\Phi$ the cdf of the standard normal), but the introduction of latent variables makes inference easy. Also, note that in your original parametrization $\theta(t) = \eta(t)/\beta$. If you are interested in implementing the inference algorithm, take a look at this paper. They use a very similar model so you can easily adapt the algorithm. To understand EP the following page may be found useful. If you are interested in pursuing this approach let me know; I can provide more detailed advice on how to implement the inference algorithm.
How to model a biased coin with time varying bias? I doubt you can come up with a model with an analytic solution, but the inference can still be made tractable using the right tools, as the dependency structure of your model is simple. As a machine learning
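The generative side of this model is simple to write down. A Python sketch (the $\tau$ and $\beta$ values are arbitrary choices for illustration), with $\Phi$ expressed via the error function:

```python
import math
import random

def simulate(T, tau=0.1, beta=1.0, seed=0):
    """Draw T flips: eta takes N(eta, tau^2) steps, and P[X(t)=1] = Phi(eta(t)/beta)."""
    rnd = random.Random(seed)
    eta, flips, probs = 0.0, [], []
    for _ in range(T):
        p = 0.5 * (1.0 + math.erf(eta / (beta * math.sqrt(2.0))))  # Phi(eta/beta)
        probs.append(p)
        flips.append(1 if rnd.random() < p else 0)
        eta += rnd.gauss(0.0, tau)        # random-walk step for the latent bias
    return flips, probs

flips, probs = simulate(500)
```

Inference (recovering the latent $\eta(t)$ path from the flips) is the part handled by EP in the linked paper; this sketch only covers the forward model.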
24,593
How to model a biased coin with time varying bias?
To elaborate on my comment, a model such as $p(t)=p_0 e^{-t}$ is simple and allows estimation of $p(t)$ by estimating $p_0$ using maximum likelihood estimation. But does the probability really decay exponentially? This model would be clearly wrong if you observe time periods with a higher frequency of success than you observed at earlier and later times. Oscillatory behavior could be modelled as $p(t)=p_0 |\sin t|$. Both models are very tractable and can be solved by maximum likelihood, but they give very different solutions.
How to model a biased coin with time varying bias?
To elaborate on my comment, a model such as $p(t)=p_0 e^{-t}$ is simple and allows estimation of $p(t)$ by estimating $p_0$ using maximum likelihood estimation. But does the probability
How to model a biased coin with time varying bias? To elaborate on my comment, a model such as $p(t)=p_0 e^{-t}$ is simple and allows estimation of $p(t)$ by estimating $p_0$ using maximum likelihood estimation. But does the probability really decay exponentially? This model would be clearly wrong if you observe time periods with a higher frequency of success than you observed at earlier and later times. Oscillatory behavior could be modelled as $p(t)=p_0 |\sin t|$. Both models are very tractable and can be solved by maximum likelihood, but they give very different solutions.
How to model a biased coin with time varying bias? To elaborate on my comment, a model such as $p(t)=p_0 e^{-t}$ is simple and allows estimation of $p(t)$ by estimating $p_0$ using maximum likelihood estimation. But does the probability
24,594
How to model a biased coin with time varying bias?
Your probability changes with $t$ but, as Michael said, you don't know how: linearly or not? It looks like a model selection problem where your probability $p$: $p=\Phi(g(t,\theta))$ may depend on a highly nonlinear function $g(t,\theta)$. $\Phi$ is just a bounding function that guarantees probabilities between 0 and 1. A simple exploratory approach would be to try several probits for $\Phi$ with different nonlinear $g()$ and to perform a $g()$ model selection based on standard information criteria. To answer your re-edited question: as you said, using a probit would imply numerical solutions only, but you may use a logistic function instead. Logistic function: $P[\theta(t+1)] = \frac{1}{1+\exp{(\theta(t)+\epsilon)}}$, linearized by $ \log{\frac{P}{1-P}} = \theta(t)+\epsilon $. I'm not sure how this can work under a Kalman filter approach, but I still believe that a nonlinear specification like $\theta(t+1)=a t^3 +bt^2+ct + d$, or many others without a random term, will do the job. As you can see this function is "smooth" in the sense that it's continuous and differentiable. Unfortunately, adding $\epsilon$ would generate jumps in the resulting probability, which is something you don't want, so my advice would be to take out $\epsilon$. Logit probability: $P[Coin_{t+1}=H | t] = \frac{1}{1+\exp{(\theta(t))}}$. You already have randomness in the Bernoulli event (Markov chain) and you are adding an additional source of it due to $\epsilon$. Thus, your problem could be solved as a probit or logit estimated by maximum likelihood with $t$ as explanatory variable. I suppose you agree that parsimony is very important, unless your main objective is to apply a given method (HMM and Kalman filter) and not to give the simplest valid solution to your problem.
How to model a biased coin with time varying bias?
Your probability changes with $t$ but, as Michael said, you don't know how: linearly or not? It looks like a model selection problem where your probability $p$: $p=\Phi(g(t,\theta))$ may depend on a
How to model a biased coin with time varying bias? Your probability changes with $t$ but, as Michael said, you don't know how: linearly or not? It looks like a model selection problem where your probability $p$: $p=\Phi(g(t,\theta))$ may depend on a highly nonlinear function $g(t,\theta)$. $\Phi$ is just a bounding function that guarantees probabilities between 0 and 1. A simple exploratory approach would be to try several probits for $\Phi$ with different nonlinear $g()$ and to perform a $g()$ model selection based on standard information criteria. To answer your re-edited question: as you said, using a probit would imply numerical solutions only, but you may use a logistic function instead. Logistic function: $P[\theta(t+1)] = \frac{1}{1+\exp{(\theta(t)+\epsilon)}}$, linearized by $ \log{\frac{P}{1-P}} = \theta(t)+\epsilon $. I'm not sure how this can work under a Kalman filter approach, but I still believe that a nonlinear specification like $\theta(t+1)=a t^3 +bt^2+ct + d$, or many others without a random term, will do the job. As you can see this function is "smooth" in the sense that it's continuous and differentiable. Unfortunately, adding $\epsilon$ would generate jumps in the resulting probability, which is something you don't want, so my advice would be to take out $\epsilon$. Logit probability: $P[Coin_{t+1}=H | t] = \frac{1}{1+\exp{(\theta(t))}}$. You already have randomness in the Bernoulli event (Markov chain) and you are adding an additional source of it due to $\epsilon$. Thus, your problem could be solved as a probit or logit estimated by maximum likelihood with $t$ as explanatory variable. I suppose you agree that parsimony is very important, unless your main objective is to apply a given method (HMM and Kalman filter) and not to give the simplest valid solution to your problem.
How to model a biased coin with time varying bias? Your probability changes with $t$ but, as Michael said, you don't know how: linearly or not? It looks like a model selection problem where your probability $p$: $p=\Phi(g(t,\theta))$ may depend on a
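As a concrete version of the last suggestion, here is a hypothetical Python sketch: simulate flips whose heads probability follows a logistic function of (centered) time, then recover the two parameters by maximum likelihood via plain gradient ascent. I use the conventional minus sign inside the exponential; the data, step size, and iteration counts are arbitrary illustrations.

```python
import math
import random

rnd = random.Random(3)
T = 1500
ts = [(t - T / 2) / T for t in range(T)]          # centered, scaled time
true_a, true_b = 0.3, 4.0
ys = [1 if rnd.random() < 1 / (1 + math.exp(-(true_a + true_b * t))) else 0
      for t in ts]

# maximum likelihood for P[H | t] = 1/(1 + exp(-(a + b t))) by gradient ascent
a = b = 0.0
lr = 1.0
for _ in range(1500):
    ga = gb = 0.0
    for t, y in zip(ts, ys):
        p = 1 / (1 + math.exp(-(a + b * t)))
        ga += y - p                # d loglik / da
        gb += (y - p) * t          # d loglik / db
    a += lr * ga / T
    b += lr * gb / T
```

The log-likelihood is concave, so this simple scheme converges; in practice one would use an off-the-shelf logistic regression routine instead.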
24,595
How to deal with ceiling effect due to measurement tool?
I like to use heterogeneous Mixture Models to describe combined effects from fundamentally different sources. You might look at something like a "Zero Inflated Poisson" model in the style of Diane Lambert. "Zero-Inflated Poisson Regression, With an Application to Defects in Manufacturing", Diane Lambert, Technometrics, Vol. 34, Iss. 1, 1992 I find this idea particularly delightful because it seems to contradict the notion that application of statistical design of experiments to medicine cannot fully cure disease. The notion that the scientific method cannot complete its purpose in medicine comes from the idea that there is no disease data from a "perfectly" healthy individual, and so that data cannot inform the remedy of disease. Without measurement there is no room to improve. Using something like a zero-inflated model allows one to extract useful information from data that is partially "error free". It is using insight into the process to take the information that could be thought of as "silent" and make it speak. To me this is the kind of thing you are trying to do. Now I can't begin to assert which combinations of models to use. I suspect that you could use a zero-inflated Gaussian Mixture Model (GMM) for starters. The GMM is a bit of an empirical universal approximator for continuous PDFs - like the PDF cousin of the Fourier Series approximation, but with the support of the central limit theorem to improve the global applicability and allow typically many fewer components in order to make a "good" approximation. Best of Luck. EDIT: More on zero-inflated models: http://jirss.irstat.ir/files/site1/user_files_f6193f/admin-A-10-1-27-630ff9e.pdf http://faculty.franklin.uga.edu/dhall/sites/faculty.franklin.uga.edu.dhall/files/HallZhangStatMod04.pdf
How to deal with ceiling effect due to measurement tool?
I like to use heterogeneous Mixture Models to describe combined effects from fundamentally different sources. You might look at something like a "Zero Inflated Poisson" model in the style of Diane Lam
How to deal with ceiling effect due to measurement tool? I like to use heterogeneous Mixture Models to describe combined effects from fundamentally different sources. You might look at something like a "Zero Inflated Poisson" model in the style of Diane Lambert. "Zero-Inflated Poisson Regression, With an Application to Defects in Manufacturing", Diane Lambert, Technometrics, Vol. 34, Iss. 1, 1992 I find this idea particularly delightful because it seems to contradict the notion that application of statistical design of experiments to medicine cannot fully cure disease. The notion that the scientific method cannot complete its purpose in medicine comes from the idea that there is no disease data from a "perfectly" healthy individual, and so that data cannot inform the remedy of disease. Without measurement there is no room to improve. Using something like a zero-inflated model allows one to extract useful information from data that is partially "error free". It is using insight into the process to take the information that could be thought of as "silent" and make it speak. To me this is the kind of thing you are trying to do. Now I can't begin to assert which combinations of models to use. I suspect that you could use a zero-inflated Gaussian Mixture Model (GMM) for starters. The GMM is a bit of an empirical universal approximator for continuous PDFs - like the PDF cousin of the Fourier Series approximation, but with the support of the central limit theorem to improve the global applicability and allow typically many fewer components in order to make a "good" approximation. Best of Luck. EDIT: More on zero-inflated models: http://jirss.irstat.ir/files/site1/user_files_f6193f/admin-A-10-1-27-630ff9e.pdf http://faculty.franklin.uga.edu/dhall/sites/faculty.franklin.uga.edu.dhall/files/HallZhangStatMod04.pdf
How to deal with ceiling effect due to measurement tool? I like to use heterogeneous Mixture Models to describe combined effects from fundamentally different sources. You might look at something like a "Zero Inflated Poisson" model in the style of Diane Lam
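For concreteness, the zero-inflated Poisson cited above mixes a point mass at zero with a Poisson; a Python sketch (the parameter values are arbitrary illustrations):

```python
from math import exp, factorial

def zip_pmf(k, lam, pi):
    """P(K = k) for a zero-inflated Poisson: extra mass pi parked at zero."""
    poisson = exp(-lam) * lam ** k / factorial(k)
    return pi + (1 - pi) * poisson if k == 0 else (1 - pi) * poisson

# the mass sums to one, and zero is inflated relative to plain Poisson(lam)
total = sum(zip_pmf(k, lam=2.5, pi=0.3) for k in range(80))
plain_zero = exp(-2.5)
```

The ceiling-effect analogue would put the extra mass at the instrument's upper limit rather than at zero, with some continuous component for the in-range readings.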
24,596
How to deal with ceiling effect due to measurement tool?
Clustering the results and defining a scale might be a solution. Make a category variable like so (or differently):
1. High sensitivity
2. Normal sensitivity
3. Low sensitivity
4. Insensitive (the ones that are off the scale in your case)
You could use this variable to do the analysis, but whether the results are meaningful depends on how well you define the categories.
How to deal with ceiling effect due to measurement tool?
Clustering the results and defining a scale might be a solution. Make a category variable like so (or differently): High sensitivity Normal sensitivity Low sensitivity Insensitive (the ones that are
How to deal with ceiling effect due to measurement tool? Clustering the results and defining a scale might be a solution. Make a category variable like so (or differently):
1. High sensitivity
2. Normal sensitivity
3. Low sensitivity
4. Insensitive (the ones that are off the scale in your case)
You could use this variable to do the analysis, but whether the results are meaningful depends on how well you define the categories.
How to deal with ceiling effect due to measurement tool? Clustering the results and defining a scale might be a solution. Make a category variable like so (or differently): High sensitivity Normal sensitivity Low sensitivity Insensitive (the ones that are
24,597
How do you select variables in a regression model?
See Gelman and Hill, Data Analysis Using Regression and Multilevel/Hierarchical Models, p. 69; they have a section on model selection. She is using a question-based approach, which is completely fine, but in her paper she needs to justify why she included what she did in the model. Just like you said, "These variables are not always strong predictors of the effect but may be important for a physician when deciding on treatment for individual patients," so as long as she justifies why these predictors should be included then it is fine. For me personally I prefer these methods. So here comes my answer to 2. Stepwise, forward, and backwards I think are black boxes. When you run a model through all three you will not arrive at the same predictors. Therefore in terms of which to use I wouldn't have a clear answer. AIC or BIC is okay to use to compare models.
How do you select variables in a regression model?
See Gelman and Hill, Data Analysis Using Regression and Multilevel/Hierarchical Models, p. 69; they have a section on model selection. She is using a question-based approach, which is completely fine, but
How do you select variables in a regression model? See Gelman and Hill, Data Analysis Using Regression and Multilevel/Hierarchical Models, p. 69; they have a section on model selection. She is using a question-based approach, which is completely fine, but in her paper she needs to justify why she included what she did in the model. Just like you said, "These variables are not always strong predictors of the effect but may be important for a physician when deciding on treatment for individual patients," so as long as she justifies why these predictors should be included then it is fine. For me personally I prefer these methods. So here comes my answer to 2. Stepwise, forward, and backwards I think are black boxes. When you run a model through all three you will not arrive at the same predictors. Therefore in terms of which to use I wouldn't have a clear answer. AIC or BIC is okay to use to compare models.
How do you select variables in a regression model? See Gelman and Hill, Data Analysis Using Regression and Multilevel/Hierarchical Model pg 69, they have a section on model selection. She is using a question based approach which is completely fine but
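On the last point, an AIC/BIC comparison is easy to do by hand. Below is a small Python sketch (my own construction, not from the thread; the data, model names, and the `aic_bic` helper are all hypothetical) that fits three nested OLS models to simulated data and computes Gaussian AIC and BIC. The model that omits a real predictor should be penalized clearly.

```python
import numpy as np

def aic_bic(y, X):
    """Gaussian-likelihood AIC and BIC for an OLS fit (k = columns of X plus sigma^2)."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                                    # coefficients plus the error variance
    ll = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)     # maximized log-likelihood
    return 2 * k - 2 * ll, k * np.log(n) - 2 * ll

rng = np.random.default_rng(0)
n = 200
x1, x2, junk = rng.normal(size=n), rng.normal(size=n), rng.normal(size=n)
y = 1.0 + 2.0 * x1 + 0.5 * x2 + rng.normal(size=n)

X_small = np.column_stack([np.ones(n), x1])              # omits a real predictor
X_right = np.column_stack([np.ones(n), x1, x2])          # the true model
X_big   = np.column_stack([np.ones(n), x1, x2, junk])    # adds an irrelevant one

for name, X in [("x1 only", X_small), ("x1 + x2", X_right), ("x1 + x2 + junk", X_big)]:
    aic, bic = aic_bic(y, X)
    print(f"{name:15s}  AIC={aic:8.2f}  BIC={bic:8.2f}")
```

Note that AIC and BIC only rank models fitted to the same response; they say nothing in absolute terms.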
24,598
What is this "maximum correlation coefficient"?
The maximum correlation coefficient corresponds to the optimal value of the shape parameter: in a PPCC plot you compute the correlation coefficient of the probability plot over a range of candidate shape values, and the shape that maximizes that correlation is the best-fit estimate. This site may help a bit: Probability plot correlation coefficient
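The idea can be sketched numerically: generate data from a Weibull with known shape, then scan a grid of candidate shapes for the one whose theoretical quantiles correlate best with the ordered sample. This Python sketch is my own illustration (not from the linked page); the `ppcc` helper, the grid, and the choice of uniform plotting positions are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(42)
sample = np.sort(rng.weibull(2.0, size=500))   # true shape c = 2, scale 1

# simple uniform plotting positions p_i = (i - 0.5)/n
n = len(sample)
p = (np.arange(1, n + 1) - 0.5) / n

def ppcc(c):
    """Correlation between the ordered sample and standard Weibull(c) quantiles."""
    q = (-np.log1p(-p)) ** (1.0 / c)           # Weibull quantile function F^{-1}(p)
    return np.corrcoef(sample, q)[0, 1]

grid = np.linspace(0.5, 4.0, 300)
corrs = np.array([ppcc(c) for c in grid])
c_hat = grid[np.argmax(corrs)]                 # shape at the maximum correlation
print(f"shape maximizing PPCC: {c_hat:.2f}, max correlation: {corrs.max():.4f}")
```

The estimate `c_hat` should land near the true shape of 2, and the correlation at the maximum should be close to 1, which is exactly what the PPCC plot displays.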
24,599
Validity of confidence interval for $\rho$ when $X\sim N_3(0,\Sigma)$ with $\Sigma_{ij}=\rho^{|i-j|}$
Normally, inverting the likelihood ratio test statistic or Rao's score test statistic is a nice technique to keep the parameter within its parameter space. However, since we have a sample size of 1 and the exact distribution of these test statistics is unknown in this case, these methods will fail miserably. It's times like these when the frequentist approach should give way to the Bayesian approach. The Bayesian approach will develop a credible interval, not a confidence interval, but it is guaranteed to be within its parameter space.

Since we have a sample size of 1, I recommend using a uniform prior for $\rho$, namely $\rho \sim Uni(-1,1)$. Hence the posterior density of $\rho$ will be equivalent to the likelihood, up to a multiplicative constant. That is, \begin{eqnarray*} p(\rho | \boldsymbol{x} ) = \frac{1}{C} \exp\left(-\frac{1}{2} \left[\log |\boldsymbol{\Sigma}| + \boldsymbol{x}^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{x}\right]\right), \end{eqnarray*} where the normalizing constant is $C = \int_{-1}^1 \exp\left(-\frac{1}{2} \left[\log |\boldsymbol{\Sigma}| + \boldsymbol{x}^{\prime} \boldsymbol{\Sigma}^{-1} \boldsymbol{x}\right]\right) \mbox{d} \rho$. This may be found by numerical integration.

In order to construct a two-sided $100(1-\alpha)\%$ credible interval, we can use a univariate root solver to find the values of $\rho_l$ and $\rho_u$ such that $\alpha/2 =\int_{-1}^{\rho_l} p(\rho | \boldsymbol{x} ) \mbox{d} \rho$ and $\alpha/2 =\int_{\rho_u}^{1} p(\rho | \boldsymbol{x} ) \mbox{d} \rho$. The $100(1-\alpha)\%$ credible interval is then $(\rho_l, \rho_u)$.

Here is some R code to accomplish this.
library(MASS)

p = .5
Sigma = matrix(c(1, p, p^2, p, 1, p, p^2, p, 1), 3, 3)
x = mvrnorm(1, rep(0, 3), Sigma)

# unnormalized posterior: likelihood times the flat prior
prop.post = function(p, x) {
  sigma = matrix(c(1, p, p^2, p, 1, p, p^2, p, 1), 3, 3)
  exp(-.5 * (as.numeric(determinant(sigma, logarithm = TRUE)$mod) +
             as.numeric(t(x) %*% solve(sigma) %*% x)))
}
v.prop.post = Vectorize(prop.post, vectorize.args = "p")

# normalizing constant; integrate() needs the vectorized version
con = integrate(v.prop.post, -1, 1, x = x)$value

true.post = function(p, x, con) prop.post(p, x) / con
v.true.post = Vectorize(true.post, vectorize.args = "p")

# solve for the endpoints leaving a/2 posterior mass in each tail
a = .05
ci.l = function(x2, x, con, a)
  integrate(v.true.post, -1, x2, x = x, con = con)$value - a / 2
ci.u = function(x2, x, con, a)
  integrate(v.true.post, x2, 1, x = x, con = con)$value - a / 2

v1 = uniroot(ci.l, c(-.9999, .9999), x = x, con = con, a = a)$root
v2 = uniroot(ci.u, c(-.9999, .9999), x = x, con = con, a = a)$root

# plot the posterior with the credible interval marked
s = seq(-.99, .99, length = 1000)
plot(v.true.post(s, x, con) ~ s, type = "l")
abline(v = v1, lty = 2)
abline(v = v2, lty = 2)
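The same procedure can be cross-checked outside R. Below is a rough Python translation of the sketch above, assuming NumPy and SciPy are available; the function names are mine, the single observation is simulated at $\rho = 0.5$, and the integration is kept away from the singular endpoints $\rho = \pm 1$.

```python
import numpy as np
from scipy import integrate, optimize

rng = np.random.default_rng(1)

def sigma(rho):
    """Covariance with Sigma_ij = rho^|i-j| for d = 3."""
    return np.array([[1.0, rho, rho**2],
                     [rho, 1.0, rho],
                     [rho**2, rho, 1.0]])

true_rho = 0.5
x = np.linalg.cholesky(sigma(true_rho)) @ rng.normal(size=3)  # one draw from N_3(0, Sigma)

def unnorm_post(rho):
    """Likelihood of the single observation = unnormalized posterior under a flat prior."""
    S = sigma(rho)
    _, logdet = np.linalg.slogdet(S)
    return np.exp(-0.5 * (logdet + x @ np.linalg.solve(S, x)))

R = 0.9999  # Sigma is singular at rho = +/-1, so stop just short of the endpoints
C, _ = integrate.quad(unnorm_post, -R, R)
post = lambda r: unnorm_post(r) / C

# endpoints leaving a/2 posterior mass in each tail
a = 0.05
lo = optimize.brentq(lambda r: integrate.quad(post, -R, r)[0] - a / 2, -R, R)
hi = optimize.brentq(lambda r: integrate.quad(post, r, R)[0] - a / 2, -R, R)
print(f"95% credible interval for rho: ({lo:.3f}, {hi:.3f})")
```

By construction the interval lies inside $(-1, 1)$ and carries 95% of the posterior mass, which is the whole advantage over the naive frequentist interval.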
24,600
Does there exist an analogous statement to BLUE (Gauss-Markov) for GLMs?
Since it’s been five years, I will post some speculation, even though I lack a proof. I would think the answer is in the negative. In GLMs other than linear regression, the idea of a "linear" estimator does not make sense to me. Even if we drop the linearity condition from the Gauss-Markov theorem, then depending on the error distribution, the usual OLS solution might not be the BUE (best, possibly nonlinear, unbiased estimator). Further, the typical maximum likelihood estimation methods in GLMs (I’m thinking of logistic regression) can give biased estimates.

If we relax our requirement from unbiasedness to consistency, then maximum likelihood estimators are known to be asymptotically efficient among consistent estimators, and the question becomes almost a tautology. “Is my GLM maximum likelihood estimator the most efficient consistent estimator?” YES! That’s part of the deal with maximum likelihood estimation and why it is a popular approach.
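The bias claim can be illustrated in the simplest GLM setting, an intercept-only logistic regression, where the MLE is just the log-odds of the sample proportion. The following Python simulation is my own sketch (not from the answer); the parameter values are arbitrary, and samples where the MLE is infinite are dropped.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p_true, reps = 20, 0.7, 100_000
true_logit = np.log(p_true / (1 - p_true))

# Intercept-only logistic regression: the MLE is logit(p_hat), p_hat the sample mean.
k = rng.binomial(n, p_true, size=reps)
keep = (k > 0) & (k < n)           # drop degenerate samples where the MLE is +/- infinity
p_hat = k[keep] / n
mle = np.log(p_hat / (1 - p_hat))

print(f"true logit: {true_logit:.3f}, mean MLE over {keep.sum()} samples: {mle.mean():.3f}")
```

With n = 20 the average estimate overshoots the true log-odds of about 0.85 by a few hundredths, a small but systematic finite-sample bias; it vanishes as n grows, consistent with the MLE being consistent but not unbiased.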