idx | question | answer
|---|---|---|
32,601 | Why do people use PCA when it has so many issues? | "Sign-arbitrariness" is merely an artifact of how we represent the PCA
results. There is no arbitrariness to the PCA itself: the eigenspaces
it works with are perfectly well defined. Issues (1) and (3) are
advantages of PCA, because they allow one to use subject-matter
knowledge and the objectives of the analysis appropriately. Referring
to this as "immature" rather misses the entire point of statistical
analysis, IMHO, which is to solve real problems in creative and
principled ways (as opposed to dumping data into black boxes).
– whuber
What I don't see mentioned here yet is that many people use PCA the same way you'd use a histogram, density plot, or scatter plot: a means to quickly inspect data, rather than a final solution to a problem. PCA
is useful for this purpose as the number of dimensions grows, but of
course is more informative if care is taken in choosing whether and
how to scale.
– Frans Rodenburg
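To make the scaling point concrete, here is a minimal R sketch (my own illustration, using the built-in USArrests data, which is not mentioned in the thread): unscaled PCA is dominated by whichever variable happens to have the largest raw variance, while scaling puts the variables on an equal footing.
# USArrests: Assault has by far the largest raw variance, so unscaled PC1 is essentially Assault
pca_raw    <- prcomp(USArrests, scale. = FALSE)
pca_scaled <- prcomp(USArrests, scale. = TRUE)
summary(pca_raw)$importance[2, ]      # proportion of variance per component
summary(pca_scaled)$importance[2, ]   # much more evenly spread after scaling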
32,602 | what does regularization mean in xgboost (tree) | In tree-based methods, regularization is usually understood as defining a minimum gain that a further split must achieve:
Minimum loss reduction required to make a further partition on a leaf
node of the tree. The larger gamma is, the more conservative the
algorithm will be.
Source: https://xgboost.readthedocs.io/en/latest/parameter.html
This minimum gain can be set to any value in $[0, \infty)$.
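For concreteness, here is a minimal R sketch of how gamma is passed to XGBoost; the data set and the other parameter values are my own placeholders, not part of the original answer.
library(xgboost)
X <- as.matrix(mtcars[, -1]); y <- mtcars$mpg
dtrain <- xgb.DMatrix(data = X, label = y)
# gamma is the minimum loss reduction a split must achieve; larger values prune more aggressively
fit <- xgb.train(params = list(max_depth = 3, eta = 0.1, gamma = 1,
                               objective = "reg:squarederror"),
                 data = dtrain, nrounds = 50)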
Here's a somewhat good article on how to tune regularization on XGBoost.
32,603 | Who is right, the statistician or the surgeon? | As the statistician did not make any statements, he cannot be wrong. He just asked two questions: 1) Did you have controls? and 2) Which half?
The surgeon is clearly wrong, unless a) Every patient he treated survived and b) No patient who was not treated would survive (or, of course, vice versa).
Both the surgeon and the statistician are making good points.
32,604 | Who is right, the statistician or the surgeon? | This sounds a lot like that story about one of the sons in the fourth generation of the Pearson family, the one who became a paramedic. He used to not help half of his patients with a cardiac arrest, in order to test whether helping or not helping made a significant difference in getting the heart beating again.
A grandchild of Joan Fisher and George Box is currently doing a project for his final exam as an air traffic controller. He is testing on half the pilots whether they will fly better and crash less often if he is not speaking to them.
Do you think they are right to do so?
32,605 | Who is right, the statistician or the surgeon? | The statistician sounds like a frequentist, and he's correct if we view things in terms of measures of evidence. In particular, at this point we have no direct evidence regarding the surgeon's effectiveness.
Perhaps surprisingly to most statisticians, the surgeon is taking more of a Bayesian perspective. That is, because of his advanced knowledge of medicine, he is very strongly convinced that his procedures are helping his patients. He's human, so he must realize that he does not know exactly how effective his treatments are, but he is so confident the effect is positive that, in his view, the long-term benefit of treating every patient outweighs collecting controls, who would with very high probability be worse off than if they were treated, only to collect data that confirms what he already knows. So while collecting data on controls may be informative, it is dangerous to the controls and not likely to make any difference in future decisions. Therefore, it is quite logical for him not to use controls.
Who's correct? Well, the statistician is certainly correct that we don't have any data that demonstrates that the surgeon's methods are effective.
But the lack of evidence doesn't mean the surgeon is wrong! Assuming the surgeon is not over-confident, the surgeon is also correct that collecting data on controls is not the ethical thing to do. What it all comes down to is: do you trust the surgeon's confidence?
32,606 | Who is right, the statistician or the surgeon? | The surgeon is right.
The people who suffered or died because they did not get this operation serve as a control group. It would be better to formalize this and quantify the improved performance (e.g. 70% mortality rate vs 10%), but we do have a group to which we can compare.
Now...if the surgeon is claiming that his treatment saved lives, yet the patients tended to do just fine without the procedure, then the success of the treatment is not so remarkable. However, quite the opposite is implied.
The "which half" line is wrong. Nothing suggests that the surgeon's procedure causes death. Perhaps it doesn't help compared to a control group, but it certainly sounds like most patients survive. Operating on a patient certainly doesn't suggest that they are doomed to die in the OR. | Who is right, the statistician or the surgeon? | The surgeon is right.
The people who suffered or died because they did not get this operation serve as a control group. It would be better to formalize this and quantify the improved performance (e.g | Who is right, the statistician or the surgeon?
The surgeon is right.
The people who suffered or died because they did not get this operation serve as a control group. It would be better to formalize this and quantify the improved performance (e.g. 70% mortality rate vs 10%), but we do have a group to which we can compare.
Now...if the surgeon is claiming that his treatment saved lives, yet the patients tended to do just fine without the procedure, then the success of the treatment is not so remarkable. However, quite the opposite is implied.
The "which half" line is wrong. Nothing suggests that the surgeon's procedure causes death. Perhaps it doesn't help compared to a control group, but it certainly sounds like most patients survive. Operating on a patient certainly doesn't suggest that they are doomed to die in the OR. | Who is right, the statistician or the surgeon?
The surgeon is right.
The people who suffered or died because they did not get this operation serve as a control group. It would be better to formalize this and quantify the improved performance (e.g |
32,607 | Why are natural splines almost always cubic? | This is probably anticlimactic... I think this is a bit of a convention if we want to consider the resulting fit to be smooth. It stems from, as well as feeds on, the fact that by a smooth function one commonly means "twice differentiable". To quote Faraway's Linear Models with R: "The basis function is continuous and is also continuous in its first and second derivatives at each knotpoint. This property ensures the smoothness of the fit.".
To start with an example: such a convention immediately lets us invoke Taylor's theorem, namely that if $g$ is a smooth function there exists a $\psi \in (0,x)$ such that $g(x) = g(0) + xg'(0) + \frac{x^2}{2}g''(\psi)$. Higher order differentials definitely do matter at times, but the usual convention is to check the first two and proceed.
Additionally, following the rationale from Ramsay & Silverman's seminal book on Functional Data Analysis, the second derivative $g''(x)$ of a function at $x$ is often called its curvature at $x$, and its squared integral (i.e. the integrated squared second derivative $\int [g''(x)]^2dx$) can be seen as a natural measure of a function's smoothness (or roughness, depending on how we look at it). This working assumption of "smooth enough because the second derivative exists" is almost universal when working with curves/functional data (e.g. Horváth & Kokoszka's Inference for Functional Data with Applications and Ferraty & Vieu's Nonparametric Functional Data Analysis put similar conventions in place); once again, this is a working assumption and not a hard requirement. It also goes without saying that if we work with $g''(x)$ as our unit of analysis, we assume that $g''''(x)$ exists, and so forth. As a side note: the existence of a second derivative is associated with the isotropy of a function (e.g. see Switzer (1976), Geometrical measures of the smoothness of random functions), which is a reasonable assumption for data assumed to lie on a continuum (e.g. have spatial dependence).
Let me note that there is no reason why a higher- or lower-order requirement for the continuity of derivatives cannot be used. For example, we might choose to use a piecewise linear interpolant in cases where we have an insufficient amount of data. Finally, the degree of smoothness is indeed chosen using a cross-validation approach (usually Generalised Cross-Validation, to be more exact) based on the metric we choose (see, for example, how the popular function mgcv::gam does exactly that when fitting smoothing splines; Yao et al. (2005), Functional linear regression analysis for longitudinal data, does the same when picking the bandwidth of the kernel smoothers, etc.)
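As a small illustration of these two points (a hand-picked natural cubic basis versus letting generalised cross-validation choose the smoothness), here is a hedged R sketch on simulated data of my own; the original answer contains no code.
library(splines)
set.seed(1)
x <- seq(0, 1, length.out = 200)
y <- sin(2 * pi * x) + rnorm(200, sd = 0.3)
# Natural cubic spline basis: continuous second derivatives at the knots, linear beyond the boundary knots
fit_ns <- lm(y ~ ns(x, df = 5))
# Smoothing spline: the roughness penalty is the integrated squared second derivative,
# and the amount of smoothing is chosen by generalised cross-validation (cv = FALSE)
fit_ss <- smooth.spline(x, y, cv = FALSE)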
One might also find the following Math.SE thread insightful: Is second derivative of a function related to curve smoothness? Unfortunately, it does not contain a definitive answer.
So, "why are the natural splines almost always cubic?" Because assuming the existence of second derivatives and thus the need for a cubic fit, is a good convention for most cases. ☺ | Why are natural splines almost always cubic? | This is probably anticlimactic... I think this is a bit of conversion if we want to consider the resulting fit to be smooth. It stems as well as feeds on the fact that by smooth function one commonly | Why are natural splines almost always cubic?
This is probably anticlimactic... I think this is a bit of conversion if we want to consider the resulting fit to be smooth. It stems as well as feeds on the fact that by smooth function one commonly refers to "twice differentiable". To quote Faraway's Linear Models with R: "The basis function is continuous and is also continuous in its first and second derivatives at each knotpoint. This property ensures the smoothness of the fit.".
To start with an example: Such a convention immediately takes care of Taylor's theorem such that if $g$ is a smooth function there exist a $\psi \in (0,x)$ such that $g(x) = g(0) + xg'(0) + \frac{x^2}{2}g''(\psi)$. Higher order differentials definitely do matter at times but the usual convention is to check the first two and proceed.
Additionally, following the rationale from Ramsay & Silverman's seminal book on Functional Data Analysis, the second derivative $g''(x)$ of a function at $x$ is often called its curvature at $x$ and the squared integral of it (i.e. the integrated squared second derivative: $\int [g''(x)]^2dx$) can be seen as a natural measure of a function smoothness (or roughness depend how we look at this). This working assumption of "smooth enough because second derivative exists" is almost universal when working with curves/functional data (e.g. Horváth & Kokoszka's Inference for Functional Data with Applications and Ferraty & Vieu's Nonparametric Functional Data Analysis put similar convention in place); once again this working assumption and not a hard requirement. It also goes without saying that if we work with $g''(x)$ as our unit of analysis we assume that $g''''(x)$ exist and so forth. As a side-note: the existence of a second derivative is associated with the isotropy of a function (e.g. see Switzer (1976) Geometrical measures of the smoothness of random functions) which is a reasonable assumption for data assumed to lie on a continuum (e.g. have spatial dependence).
Let me note that there is no reason why a higher or lower order requirement for the continuity of derivatives cannot be used. For example, we might choose to use a piecewise linear interpolant in cases where we have an insufficient amount of data. Finally, the degree of smoothness is indeed chosen using a cross-validation approach (usually Generalised Cross-Validation to be more exact) based on the metric we choose (see for example the popular function mgcv::gam does exactly that when fitting the smooth splines, Yao et al. (2005) Functional linear regression analysis for longitudinal data does the same when picking the bandwidth of the kernel smoothers, etc.)
One might find the following Math.SE thread on: Is second derivative of a function related to curve smoothness? also insightful, unfortunately it does not contain a definite answer.
So, "why are the natural splines almost always cubic?" Because assuming the existence of second derivatives and thus the need for a cubic fit, is a good convention for most cases. ☺ | Why are natural splines almost always cubic?
This is probably anticlimactic... I think this is a bit of conversion if we want to consider the resulting fit to be smooth. It stems as well as feeds on the fact that by smooth function one commonly |
32,608 | Forecasting daily time series sales revenue with many zero entries | Would I need to change my $ into sales numbers #?
Ideally yes. Since this is not a unique product, how do I know if \$600 is one \$600 unit or two \$300 units?
Is there a Python package for crostons or do I have to use R?
Right now there are not any good Python packages for Croston's and similar intermittent demand methods; R offers better options.
Is my assumption correct that ARIMA has issues with data structured like mine?
Depends. ARIMA might work with the data in the top graph, but it won't work with the data in the bottom graph (or anything sparser than that).
Croston's and its newer variants (TSB, etc.) are a better option. But you need to keep in mind that such methods don't produce a normal forecast the way ARIMA or ETS does. They forecast a rate of sale (or velocity), which can then be used to figure out average sales over a long period of time.
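A minimal R sketch of what that looks like in practice; the series here is simulated by me, and forecast::croston is one readily available implementation.
library(forecast)
set.seed(42)
# Toy intermittent series: mostly zero days with occasional small sales counts
demand <- ts(rpois(200, lambda = 0.2) * sample(1:3, 200, replace = TRUE))
fc <- croston(demand, h = 28)
fc$mean          # a flat demand *rate* per day, not a day-by-day path
sum(fc$mean)     # expected total demand over the next four weeks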
You should also look at count predicting methods like Negative Binomial and Poisson distributions.
32,609 | Forecasting daily time series sales revenue with many zero entries | I encountered the same problem recently with a time series similar to yours. I kept trying different models, but the one that worked best for this kind of series was XGBoost with hyperparameter optimization and some lagged features.
32,610 | What is the most powerful result about the maximum of i.i.d. Gaussians? The most used in practice? | In any probabilistic application, the most fundamental object is the distribution, with the moments and limiting properties being derivable from this. Hence, the most "important" result, in the sense you've described, is the full distribution function $F_{Z_n}(z) = \Phi^n(z)$ (equivalently, the corresponding density function). In practice, this distributional result is perhaps less illuminating than some of the more basic asymptotic properties you've already listed. Although it logically implies these asymptotic results, in my view, those results are likely to be more illuminating in understanding the changing nature of the extreme value as we change $n$.
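For completeness (this step is not spelled out in the answer), the exact distribution follows in one line from independence, with $\varphi$ denoting the standard normal density:
$$ F_{Z_n}(z) = \Pr\left(\max_{1 \le i \le n} X_i \le z\right) = \Pr(X_1 \le z, \dots, X_n \le z) = \Phi^n(z) \,, \qquad f_{Z_n}(z) = n\,\Phi^{n-1}(z)\,\varphi(z) \,. $$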
It is clear from your question that you have a good understanding of the extreme value properties in the case of a maximum of IID standard normal random variables. These properties are all logically derivable from the distribution function for $Z_n$, so that is the most fundamental object at work in this problem. As in many cases, the most fundamental object is not necessarily the most illuminating, and so you will probably find that you have to make do with knowing all the results, and knowing that they illuminate different aspects of the problem.
32,611 | What is the most powerful result about the maximum of i.i.d. Gaussians? The most used in practice? | W.I.P.: Work in progress
Following p. 370 of Cramer's 1946 Mathematical Methods of Statistics, define $$\Xi_n = n(1 - \Phi(Z_n)) \,. $$ Here $\Phi$ is the cumulative distribution function of the standard normal distribution, $\mathscr{N}(0,1)$. As a consequence of its definition, we are guaranteed that $0\le \Xi_n \le n$ almost surely.
Consider a given realization $\omega \in \Omega$ of our sample space. Then in this sense $Z_n$ is both a function of $n$ and $\omega$, and $\Xi_n$ a function of $Z_n, n$, and $\omega$. For a fixed $\omega$, we can consider $Z_n$ a deterministic function of $n$, and $\Xi_n$ a deterministic function of $Z_n$ and $n$, thereby simplifying the problem. We aim to show results which hold for almost surely all $\omega \in \Omega$, allowing us to transfer our results from a deterministic analysis back to the non-deterministic setting.
Following p. 374 of Cramer's 1946 Mathematical Methods of Statistics, assume for the moment (I aim to come back and supply a proof later) that we are able to show that (for any given $\omega \in \Omega$) the following asymptotic expansion holds (using integration by parts and the definition of $\Phi$):
$$ \frac{\sqrt{2\pi}}{n}\Xi_n = \frac{1}{Z_n}e^{-\frac{Z_n^2}{2}}\left( 1 + O \left( \frac{1}{Z_n^2} \right) \right) \quad ~~ as ~~ Z_n \to \infty \,. \tag{~}$$
Clearly we have that $Z_{n+1} \ge Z_n$ for any $n$, and $Z_n$ is almost surely an increasing function of $n$ as $n\to \infty$, therefore we claim in what follows throughout that for (almost surely all) fixed $\omega$: $$ Z_n \to \infty \quad \iff \quad n \to \infty \,. $$
Hence it follows that we have (where $\sim$ denotes asymptotic equivalence):
$$ \frac{\sqrt{2\pi}}{n} \Xi_n \sim \frac{1}{Z_n} e^{-\frac{Z_n^2}{2}} \quad \text{as } Z_n \to \infty \text{, i.e., as } n \to \infty \,. $$
How we proceed in what follows amounts essentially to the method of dominant balance, and our manipulations will be formally justified by the following lemma:
Lemma: Assume that $f(n) \sim g(n)$ as $n \to \infty$, and $f(n) \to \infty$ (thus $g(n) \to \infty$). Then given any function $h$ which is formed via compositions, additions, and multiplications of logarithms and power laws (essentially any "polylog" function), we must have also that as $n \to \infty$: $$ h(f(n)) \sim h(g(n)) \,. $$ In other words, such "polylog" functions preserve asymptotic equivalence.
The truth of this lemma is a consequence of Theorem 2.1. as written here. Note also that what follows is mostly an expanded (more details) version of the answer to a similar question found here.
Taking logarithms of both sides, we get that:
$$\log ( \sqrt{2\pi} \Xi_n ) - \log n \sim -\log Z_n - \frac{Z_n^2}{2} \,. \tag{1}$$
This is where Cramer is somewhat cagey; he just says "assuming $\Xi_n$ is bounded", we can conclude blah blah blah. But showing that $\Xi_n$ is suitably bounded almost surely seems to be actually somewhat non-trivial. It seems that the proof of this may essentially be part of what's discussed on pp. 265-267 of Galambos, but I am not sure given that I am still working to understand the content of that book.
Anyway, assuming one can show that $\log \Xi_n = o(\log n)$, then it follows (since the $-Z_n^2/2$ term dominates the $-\log Z_n$ term) that:
$$ - \log n \sim - \frac{Z_n^2}{2} \quad \implies \quad Z_n \sim \sqrt{2 \log n} \,. $$
This is somewhat nice, since it is already most of what we want to show, although again it is worthwhile to note that it is essentially only kicking the can down the road, since now we have to show some certain almost surely boundedness of $\Xi_n$. On the other hand, $\Xi_n$ has the same distribution for any maximum of i.i.d. continuous random variables, so this may be tractable.
Anyway, if $Z_n \sim \sqrt{2 \log n}$ a.s., then clearly one can also conclude that $Z_n \sim \sqrt{2 \log n}(1 + \alpha(n))$ for any $\alpha(n)$ which is $o(1)$ as $n \to \infty$. Using our lemma about polylog functions preserving asymptotic equivalence above, we can substitute this expression back into $(1)$ to get:
$$\log(\sqrt{2 \pi} \Xi _n)- \log n \sim -\log (1 + \alpha) - \frac{1}{2}\log 2 - \frac{1}{2}\log \log n - \log n - 2 \alpha \log n - \alpha^2 \log n \,. $$
$$ \implies -\log(\Xi_n \sqrt{2 \pi}) \sim \log(1 + \alpha) + \frac{1}{2} \log 2 + \frac{1}{2} \log \log n + 2\alpha \log n + \alpha^2 \log n \,. $$
Here we have to go even further, and assume that $\log \Xi_n = o( \log \log n)$ as $n \to \infty$ almost surely. Again, all Cramer says is "assuming $\Xi_n$ is bounded". But since all one can say a priori about $\Xi_n$ is that $0 \le \Xi_n \le n$ a.s., it hardly seems clear that one should have $\Xi_n = O(1)$ almost surely, which seems to be the substance of Cramer's claim.
But anyway, assuming one believes that, then it follows that the dominant term which does not contain $\alpha$ is $\frac{1}{2} \log \log n$. Since $\alpha = o(1)$, it follows that $\alpha^2 = o(\alpha)$, and clearly $\log ( 1 + \alpha) = O(\alpha) = o(\alpha \log n)$, so the dominant term containing $\alpha$ is $2 \alpha \log n$. Therefore, we can rearrange and (dividing everything by $\frac{1}{2}\log\log n$ or $2 \alpha \log n$) find that
$$ - \frac{1}{2} \log \log n \sim 2 \alpha \log n \quad \implies \quad \alpha \sim - \frac{\log \log n}{4 \log n} \,. $$
Therefore, substituting this back into the above, we get that:
$$Z_n \sim \sqrt{2 \log n}- \frac{\log\log n}{2 \sqrt{2 \log n}} \,, $$
again, assuming we believe certain things about $\Xi_n$.
We rehash the same technique again; since $Z_n \sim \sqrt{2 \log n} - \frac{\log \log n}{2 \sqrt{2 \log n}}$, then it also follows that
$$ Z_n \sim \sqrt{2 \log n} - \frac{\log \log n}{2 \sqrt{2 \log n}} (1 + \beta(n)) = \sqrt{2 \log n} \left( 1 - \frac{\log \log n}{4 \log n}(1 + \beta(n)) \right) \,,$$
when $\beta(n)=o(1)$. Let's simplify a little before substituting directly back into (1); we get that:
$$ \log Z_n \sim \log(\sqrt{2 \log n}) + \underbrace{\log \left(1 - \frac{\log \log n}{4 \log n}(1 + \beta(n)) \right) }_{\log(O(1)) = o(\log n)} \sim \log (\sqrt{2 \log n}) \,.$$
$$ \frac{Z_n^2}{2} \sim \log n - \frac{1}{2} \log \log n \,(1 + \beta) + \underbrace{\frac{(\log \log n)^2}{16 \log n} ( 1 + \beta)^2}_{o((1+ \beta) \log \log n)} \sim \log n - \frac{1}{2} (1 + \beta) \log \log n \,. $$
Substituting this back into (1), we find that:
$$ \log ( \sqrt{2 \pi} \Xi_n) - \log n \sim - \log(\sqrt{2 \log n}) - \log n + \frac{1}{2}(1 + \beta) \log \log n \quad \implies \quad \beta \sim \frac{\log (4 \pi \Xi_n^2)}{\log \log n} \,. $$
Therefore, we conclude that almost surely
$$Z_n \sim \sqrt{2 \log n} - \frac{\log \log n}{2 \sqrt{2 \log n}} \left(1 + \frac{\log(4 \pi) + 2 \log( \Xi_n)}{\log \log n} \right)\\ = \sqrt{2 \log n} - \frac{\log \log n + \log (4 \pi)}{ 2 \sqrt{2 \log n} } - \frac{\log (\Xi_n)}{\sqrt{2 \log n}} \,. $$
This corresponds to the final result on p.374 of Cramer's 1946 Mathematical Methods of Statistics except that here the exact order of the error term isn't given. Apparently applying this one more term gives the exact order of the error term, but anyway it doesn't seem necessary to prove the results about the maxima of i.i.d. standard normals in which we are interested.
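As a quick numerical illustration (my own simulation sketch, not part of the derivation), one can compare the sample mean of $Z_n$ with the deterministic part of this expansion; the $-\log(\Xi_n)/\sqrt{2\log n}$ term is ignored here, so a gap of order $1/\sqrt{2\log n}$ is expected.
set.seed(1)
n <- 1e5
Zn <- replicate(500, max(rnorm(n)))       # 500 simulated maxima of n iid N(0,1) draws
mean(Zn)                                  # simulated E[Z_n]
sqrt(2*log(n)) - (log(log(n)) + log(4*pi)) / (2*sqrt(2*log(n)))  # expansion with the Xi_n term dropped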
Given the result of the above, namely that almost surely:
$$Z_n \sim \sqrt{2 \log n} - \frac{\log \log n + \log (4 \pi)}{2 \sqrt{2 \log n}} - \frac{\log (\Xi_n)}{\sqrt{2 \log n}} \quad \implies \\ Z_n = \sqrt{2 \log n} - \frac{\log \log n + \log (4 \pi)}{2 \sqrt{2 \log n}} - \frac{\log (\Xi_n)}{\sqrt{2 \log n}} + o(1)\,. \tag{$\dagger$}$$
2. Then by linearity of expectation it follows that:
$$ \mathbb{E}Z_n = \sqrt{2 \log n} - \frac{\log \log n + \log (4 \pi)}{2 \sqrt{2 \log n}} - \frac{\mathbb{E}[\log (\Xi_n)]}{\sqrt{2 \log n}} + o(1) \quad \implies \\ \frac{\mathbb{E}Z_n}{\sqrt{2 \log n}} = 1 - \frac{\mathbb{E}[\log \Xi_n]}{2 \log n} + o(1) \,. $$
Therefore, we have shown that
$$ \lim_{n \to \infty } \frac{\mathbb{E} Z_n}{\sqrt{2 \log n}} = 1 \,,$$
as long as we can also show that
$$ \mathbb{E}[\log \Xi_n] = o(\log n) \,. $$
This might not be too difficult to show since again $\Xi_n$ has the same distribution for every continuous random variable. Thus we have the second result from above.
1. Similarly, we also have from the above that almost surely:
$$\frac{Z_n}{\sqrt{2 \log n}} = 1 - \frac{\log(\Xi_n)}{2 \log n} +o(1)\,.$$
Therefore, if we can show that:
$$ \log(\Xi_n) = o(\log n) \text{ almost surely}, \tag{*}$$
then we will have shown the first result from above. Result (*) would also clearly imply a fortiori that $\mathbb{E}[\log (\Xi_n)] = o(\log n)$, thereby also giving us the second result from above.
Also note that in the proof above of ($\dagger$) we needed to assume anyway that $\Xi_n = o(\log n)$ almost surely (or at least something similar), so that if we are able to show ($\dagger$) then we will most likely also have in the process needed to show $\Xi_n = o(\log n)$ almost surely, and therefore if we can prove $(\dagger)$ we will most likely be able to immediately reach all of the following conclusions.
3. However, if we have this result, then I don't understand how one would also have that $\mathbb{E}Z_n = \sqrt{2 \log n} + \Theta(1)$, since $o(1) \not= \Theta(1)$. But at the very least it would seem to be true that $$\mathbb{E}Z_n = \sqrt{2 \log n} + O(1) \,.$$
So then it seems that we can focus on answering the question of how to show that $$ \Xi_n = o(\log n) \text{ almost surely.} $$
We will also need to do the grunt work of providing a proof for (~), but to the best of my knowledge that is just calculus and involves no probability theory, although I have yet to sit down and try it.
First let's go through a chain of trivialities in order to rephrase the problem in a way which makes it easier to solve (note that by definition $\Xi_n \ge 0$):
$$\Xi_n = o(\log n) \quad \iff \quad \lim_{n \to \infty} \frac{\Xi_n}{\log n} = 0 \quad \iff \quad \\ \forall \varepsilon > 0, \frac{\Xi_n}{\log n} > \varepsilon \text{ only finitely many times} \quad \iff \\ \forall \varepsilon >0, \quad \Xi_n > \varepsilon \log n \text{ only finitely many times} \,.$$
One also has that:
$$\Xi_n > \varepsilon \log n \quad \iff \quad n(1 - F(Z_n)) > \varepsilon \log n \quad \iff \quad 1 - F(Z_n) > \frac{\varepsilon \log n}{n} \\ \iff \quad F(Z_n) < 1 - \frac{\varepsilon \log n}{n} \quad \iff \quad Z_n \le \inf \left\{ y: F(y) \ge 1 - \frac{\varepsilon \log n}{n} \right\} \,. $$
Correspondingly, define for all $n$:
$$ u_n^{(\varepsilon)} = \inf \left\{ y: F(y) \ge 1 - \frac{\varepsilon \log n}{n} \right\} \,. $$
Therefore the above steps show us that:
$$\Xi_n = o(\log n) \text{ a.s.} \quad \iff \quad \mathbb{P}(\Xi_n = o(\log n)) = 1 \quad \iff \quad \\
\mathbb{P}(\forall \varepsilon > 0 , \Xi_n > \varepsilon \log n \text{ only finitely many times}) = 1 \\
\iff \mathbb{P}(\forall \varepsilon > 0, Z_n \le u_n^{(\varepsilon)} \text{ only finitely many times}) = 1 \\
\iff \mathbb{P}(\forall \varepsilon >0, Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) =0 \,. $$
Notice that we can write:
$$ \{ \forall \varepsilon >0, Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} = \bigcap_{\varepsilon > 0} \{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} \,.$$
The sequences $u_n^{(\varepsilon)}$ become uniformly larger as $\varepsilon$ decreases, so we can conclude that the events $$\{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} $$ are decreasing (or at least somehow monotonic) as $\varepsilon$ goes to $0$. Therefore the probability axiom regarding monotonic sequences of events allows us to conclude that:
$$\mathbb{P}(\forall \varepsilon >0, Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) = \\
\mathbb{P} \left( \bigcap_{\varepsilon > 0} \{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} \right) = \\
\mathbb{P} \left( \lim_{\varepsilon \downarrow 0} \{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} \right) = \\
\lim_{\varepsilon \downarrow 0} \mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) \,.$$
Therefore it suffices to show that for all $\varepsilon >0$,
$$\mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) = 0 $$
because of course the limit of any constant sequence is the constant.
Here is somewhat of a sledgehammer result:
Theorem 4.3.1., p. 252 of Galambos, The Asymptotic Theory of Extreme Order Statistics, 2nd edition. Let $X_1, X_2, \dots$ be i.i.d. variables with common nondegenerate and continuous distribution function $F(x)$, and let $u_n$ be a nondecreasing sequence such that $n(1 - F(u_n))$ is also nondecreasing. Then, for $u_n < \sup \{ x: F(x) <1 \}$, $$\mathbb{P}(Z_n \le u_n \text{ infinitely often}) =0 \text{ or }1 $$
according as
$$\sum_{j=1}^{+\infty}[1 - F(u_j)]\exp(-j[1-F(u_j)]) < +\infty \text{ or }=+\infty \,. $$
The proof is technical and takes around five pages, but ultimately it turns out to be a corollary of one of the Borel-Cantelli lemmas. I may get around to trying to condense the proof to only use the part required for this analysis as well as only the assumptions which hold in the Gaussian case, which may be shorter (but maybe it isn't) and type it up here, but holding your breath is not recommended. Note that in this case $\omega(F)=+\infty$, so that condition is vacuous, and $n(1-F(u_n))$ is $\varepsilon \log n$, thus clearly non-decreasing.
Anyway, the point is that, appealing to this theorem, it suffices to show that:
$$\sum_{j=1}^{+\infty}[1 - F(u_j^{(\varepsilon)})]\exp(-j[1-F(u_j^{(\varepsilon)})]) = \sum_{j=1}^{+\infty}\left[ \frac{\varepsilon \log j}{j} \right]\exp(-\varepsilon \log j) = \varepsilon \sum_{j=1}^{+\infty} \frac{ \log j}{j^{1 + \varepsilon}} < + \infty \,. $$
Note that since logarithmic growth is slower than any power law growth for any positive power law exponent (logarithms and exponentials are monotonicity preserving, so $\log \log n \le \alpha \log n \iff \log n \le n^{\alpha}$ and the former inequality can always be seen to hold for all $n$ large enough due to the fact that $\log n \le n$ and a change of variables), we have that:
$$ \sum_{j=1}^{+\infty} \frac{\log j}{j^{1 + \varepsilon}} \le \sum_{j=1}^{+\infty} \frac{j^{\varepsilon/2}}{j^{1 + \varepsilon}} = \sum_{j=1}^{+\infty} \frac{1}{j^{1 + \varepsilon/2}} < +\infty \,,$$
since the p-series is known to converge for all $p>1$, and $\varepsilon >0$ of course implies $1 + \varepsilon/2 > 1$.
Thus using the above theorem we have shown that for all $\varepsilon >0$, $\mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ i.o.}) = 0$, which to recapitulate should mean that $\Xi_n = o(\log n)$ almost surely.
We still need to show that $\log \Xi_n = o(\log \log n)$. This doesn't follow from the above, since, e.g.,
$$ \frac{1}{n} \log n = o(\log n) \,, - \log n + \log \log n \not= o(\log n) \,. $$
However, given a sequence $x_n$, if one can show that $x_n = o( (\log n)^{\delta})$ for arbitrary $\delta >0$, then it does follow that $\log(x_n) = o(\log \log n)$. Ideally I would like to be able to show this for $\Xi_n$ using the above lemma (assuming it's even true), but am not able to (as of yet). | What is the most powerful result about the maximum of i.i.d. Gaussians? The most used in practice? | W.I.P.: Work in progress
Following p. 370 of Cramer's 1946 Mathematical Methods of Statistics, define $$\Xi_n = n(1 - \Phi(Z_n)) \,. $$ Here $\Phi$ is the cumulative distribution function of the stand | What is the most powerful result about the maximum of i.i.d. Gaussians? The most used in practice?
W.I.P.: Work in progress
Following p. 370 of Cramer's 1946 Mathematical Methods of Statistics, define $$\Xi_n = n(1 - \Phi(Z_n)) \,. $$ Here $\Phi$ is the cumulative distribution function of the standard normal distribution, $\mathscr{N}(0,1)$. As a consequence of its definition, we are guaranteed that $0\le \Xi_n \le n$ almost surely.
Consider a given realization $\omega \in \Omega$ of our sample space. Then in this sense $Z_n$ is both a function of $n$ and $\omega$, and $\Xi_n$ a function of $Z_n, n$, and $\omega$. For a fixed $\omega$, we can consider $Z_n$ a deterministic function of $n$, and $\Xi_n$ a deterministic function of $Z_n$ and $n$, thereby simplifying the problem. We aim to show results which hold for almost surely all $\omega \in \Omega$, allowing us to transfer our results from a non-deterministic analysis to the non-deterministic setting.
Following p. 374 of Cramer's 1946 Mathematical Methods of Statistics, assume for the moment (I aim to come back and supply a proof later) that we are able to show that (for any given $\omega \in \Omega$) the following asymptotic expansion holds (using integration by parts and the definition of $\Phi$):
$$ \frac{\sqrt{2\pi}}{n}\Xi_n = \frac{1}{Z_n}e^{-\frac{Z_n^2}{2}}\left( 1 + O \left( \frac{1}{Z_n^2} \right) \right) \quad ~~ as ~~ Z_n \to \infty \,. \tag{~}$$
Clearly we have that $Z_{n+1} \ge Z_n$ for any $n$, and $Z_n$ is almost surely an increasing function of $n$ as $n\to \infty$, therefore we claim in what follows throughout that for (almost surely all) fixed $\omega$: $$ Z_n \to \infty \quad \iff \quad n \to \infty \,. $$
Hence it follows that we have (where $\sim$ denotes asymptotic equivalence):
$$ \frac{\sqrt{2\pi}}{n} \Xi_n \sim \frac{1}{Z_n} e^{-\frac{1}{Z_n^2}} \quad ~~ as ~~ Z_n \to \infty \quad n \to \infty \,. $$
How we proceed in what follows amounts essentially to the method of dominant balance, and our manipulations will be formally justified by the following lemma:
Lemma: Assume that $f(n) \sim g(n)$ as $n \to \infty$, and $f(n) \to \infty$ (thus $g(n) \to \infty$). Then given any function $h$ which is formed via compositions, additions, and multiplications of logarithms and power laws (essentially any "polylog" function), we must have also that as $n \to \infty$: $$ h(f(n)) \sim h(g(n)) \,. $$ In other words, such "polylog" functions preserve asymptotic equivalence.
The truth of this lemma is a consequence of Theorem 2.1. as written here. Note also that what follows is mostly an expanded (more details) version of the answer to a similar question found here.
Taking logarithms of both sides, we get that:
$$\log ( \sqrt{2\pi} \Xi_n ) - \log n \sim -\log Z_n - \frac{Z_n^2}{2} \,. \tag{1}$$
This is where Cramer is somewhat cagey; he just says "assuming $\Xi_n$ is bounded", we can conclude blah blah blah. But showing that $\Xi_n$ is suitably bounded almost surely seems to be actually somewhat non-trivial. It seems that the proof of this may essentially be part of what's discussed on pp. 265-267 of Galambos, but I am not sure given that I am still working to understand the content of that book.
Anyway, assuming one can show that $\log \Xi_n = o(\log n)$, then it follows (since the $-Z_n^2/2$ term dominates the $-\log Z_n$ term) that:
$$ - \log n \sim - \frac{Z_n^2}{2} \quad \implies \quad Z_n \sim \sqrt{2 \log n} \,. $$
This is somewhat nice, since it is already most of what we want to show, although again it is worthwhile to note that it is essentially only kicking the can down the road, since now we have to show some certain almost surely boundedness of $\Xi_n$. On the other hand, $\Xi_n$ has the same distribution for any maximum of i.i.d. continuous random variables, so this may be tractable.
Anyway, if $Z_n \sim \sqrt{2 \log n}$ a.s., then clearly one can also conclude that $Z_n \sim \sqrt{2 \log n}(1 + \alpha(n))$ for any $\alpha(n)$ which is $o(1)$ as $n \to \infty$. Using our lemma about polylog functions preserving asymptotic equivalence above, we can substitute this expression back into $(1)$ to get:
$$\log(\sqrt{2 \pi} \Xi _n)- \log n \sim -\log (1 + \alpha) - \frac{1}{2}\log 2 - \frac{1}{2}\log \log n - \log n - 2 \alpha \log n - \alpha^2 \log n \,. $$
$$ \implies -\log(\Xi_n \sqrt{2 \pi}) \sim \log(1 + \alpha) + \frac{1}{2} \log 2 + \frac{1}{2} \log \log n + 2\alpha \log n + \alpha^2 \log n \,. $$
Here we have to go even further, and assume that $\log \Xi_n = o( \log \log n) ~~ as ~~ n \to \infty$ almost surely. Again, all Cramer says is "assuming $\Xi_n$ is bounded". But since all one can say a priori about $\Xi_n$ is that $0 \le Xi_n \le n$ a.s., it hardly seems clear that one should have $\Xi_n = O(1)$ almost surely, which seems to be the substance of Cramer's claim.
But anyway, assuming one believes that, then it follows that the dominant term which does not contain $\alpha$ is $\frac{1}{2} \log \log n$. Since $\alpha = o(1)$, it follows that $\alpha^2 = o(\alpha)$, and clearly $\log ( 1 + \alpha) = o (\alpha) = o(o(\alpha \log n))$, so the dominant term containing $\alpha$ is $2 \alpha \log n$. Therefore, we can rearrange and (dividing everything by $\frac{1}{2}\log\log n$ or $2 \alpha \log n$) find that
$$ - \frac{1}{2} \log \log n \sim 2 \alpha \log n \quad \implies \quad \alpha \sim - \frac{\log \log n}{4 \log n} \,. $$
Therefore, substituting this back into the above, we get that:
$$Z_n \sim \sqrt{2 \log n}- \frac{\log\log n}{2 \sqrt{2 \log n}} \,, $$
again, assuming we believe certain things about $\Xi_n$.
We rehash the same technique again; since $Z_n \sim \sqrt{2 \log n} - \frac{\log \log n}{2 \sqrt{2 \log n}}$, then it also follows that
$$ Z_n \sim \sqrt{2 \log n} - \frac{\log \log n}{2 \sqrt{2 \log n}} (1 + \beta(n)) = \sqrt{2 \log n} \left( 1 - \frac{\log \log n}{8 \log n}(1 + \beta(n)) \right) \,,$$
when $\beta(n)=o(1)$. Let's simplify a little before substituting directly back into (1); we get that:
$$ \log Z_n \sim \log(\sqrt{2 \log n}) + \underbrace{\log \left(1 - \frac{\log \log n}{8 \log n}(1 + \beta(n)) \right) }_{\log(O(1)) = o(\log n)} \sim \log (\sqrt{2 \log n}) \,.$$
$$ \frac{Z_n^2}{2} \sim \log n - \frac{1}{2} \log \log n (1 + \beta) + \underbrace{\frac{(\log \log n)^2}{8 \log n} ( 1 \beta)^2}_{o((1+ \beta) \log \log n)} \sim \log n - \frac{1}{2} (1 + \beta) \log \log n \,. $$
Substituting this back into (1), we find that:
$$ \log ( \sqrt{2 \pi} \Xi_n) - \log n \sim - \log(\sqrt{2 \log n}) - \log n + \frac{1}{2}(1 + \beta) \log \log n \quad \implies \quad \beta \sim \frac{\log (4 \pi \Xi_n^2)}{\log \log n} \,. $$
Therefore, we conclude that almost surely
$$Z_n \sim \sqrt{2 \log n} - \frac{\log \log n}{2 \sqrt{2 \log n}} \left(1 + \frac{\log(4 \pi) + 2 \log( \Xi_n)}{\log \log n} \right)\\ = \sqrt{2 \log n} - \frac{\log \log n + \log (4 \pi)}{ 2 \sqrt{2 \log n} } - \frac{\log (\Xi_n)}{\sqrt{2 \log n}} \,. $$
This corresponds to the final result on p.374 of Cramer's 1946 Mathematical Methods of Statistics except that here the exact order of the error term isn't given. Apparently applying this one more term gives the exact order of the error term, but anyway it doesn't seem necessary to prove the results about the maxima of i.i.d. standard normals in which we are interested.
Given the result of the above, namely that almost surely:
$$Z_n \sim \sqrt{2 \log n} - \frac{\log \log n + \log (4 \pi)}{2 \sqrt{2 \log n}} - \frac{\log (\Xi_n)}{\sqrt{2 \log n}} \quad \implies \\ Z_n = \sqrt{2 \log n} - \frac{\log \log n + \log (4 \pi)}{2 \sqrt{2 \log n}} - \frac{\log (\Xi_n)}{\sqrt{2 \log n}} + o(1)\,. \tag{$\dagger$}$$
2. Then by linearity of expectation it follows that:
$$ \mathbb{E}Z_n = \sqrt{2 \log n} - \frac{\log \log n + \log (4 \pi)}{2 \sqrt{2 \log n}} - \frac{\mathbb{E}[\log (\Xi_n)]}{\sqrt{2 \log n}} + o(1) \quad \implies \\ \frac{\mathbb{E}Z_n}{\sqrt{2 \log n}} = 1 - \frac{\mathbb{E}[\log \Xi_n]}{2 \log n} + o(1) \,. $$
Therefore, we have shown that
$$ \lim_{n \to \infty } \frac{\mathbb{E} Z_n}{\sqrt{2 \log n}} = 1 \,,$$
as long as we can also show that
$$ \mathbb{E}[\log \Xi_n] = o(\log n) \,. $$
This might not be too difficult to show since again $\Xi_n$ has the same distribution for every continuous random variable. Thus we have the second result from above.
1. Similarly, we also have from the above that almost surely:
$$\frac{Z_n}{\sqrt{2 \log n}} = 1 - \frac{\log(\Xi_n)}{2 \log n} +o(1),.$$
Therefore, if we can show that:
$$ \log(\Xi_n) = o(\log n) \text{ almost surely}, \tag{*}$$
then we will have shown the first result from above. Result (*) would also clearly imply a fortiori that $\mathbb{E}[\log (\Xi_n)] = o(\log n)$, thereby also giving us the first result from above.
Also note that in the proof above of ($\dagger$) we needed to assume anyway that $\Xi_n = o(\log n)$ almost surely (or at least something similar), so that if we are able to show ($\dagger$) then we will most likely also have in the process needed to show $\Xi_n = o(\log n)$ almost surely, and therefore if we can prove $(\dagger)$ we will most likely be able to immediately reach all of the following conclusions.
3. However, if we have this result, then I don't understand how one would also have that $\mathbb{E}Z_n = \sqrt{2 \log n} + \Theta(1)$, since $o(1) \not= \Theta(1)$. But at the very least it would seem to be true that $$\mathbb{E}Z_n = \sqrt{2 \log n} + O(1) \,.$$
So then it seems that we can focus on answering the question of how to show that $$ \Xi_n = o(\log n) \text{ almost surely.} $$
We will also need to do the grunt work of providing a proof for (~), but to the best of my knowledge that is just calculus and involves no probability theory, although I have yet to sit down and try it yet.
First let's go through a chain of trivialities in order to rephrase the problem in a way which makes it easier to solve (note that by definition $\Xi_n \ge 0$):
$$\Xi_n = o(\log n) \quad \iff \quad \lim_{n \to \infty} \frac{\Xi_n}{\log n} = 0 \quad \iff \quad \\ \forall \varepsilon > 0, \frac{\Xi_n}{\log n} > \varepsilon \text{ only finitely many times} \quad \iff \\ \forall \varepsilon >0, \quad \Xi_n > \varepsilon \log n \text{ only finitely many times} \,.$$
One also has that:
$$\Xi_n > \varepsilon \log n \quad \iff \quad n(1 - F(Z_n)) > \varepsilon \log n \quad \iff \quad 1 - F(Z_n) > \frac{\varepsilon \log n}{n} \\ \iff \quad F(Z_n) < 1 - \frac{\varepsilon \log n}{n} \quad \iff \quad Z_n \le \inf \left\{ y: F(y) \ge 1 - \frac{\varepsilon \log n}{n} \right\} \,. $$
Correspondingly, define for all $n$:
$$ u_n^{(\varepsilon)} = \inf \left\{ y: F(y) \ge 1 - \frac{\varepsilon \log n}{n} \right\} \,. $$
Therefore the above steps show us that:
$$\Xi_n = o(\log n) \text{ a.s.} \quad \iff \quad \mathbb{P}(\Xi_n = o(\log n)) = 1 \quad \iff \quad \\
\mathbb{P}(\forall \varepsilon > 0 , \Xi_n > \varepsilon \log n \text{ only finitely many times}) = 1 \\
\iff \mathbb{P}(\forall \varepsilon > 0, Z_n \le u_n^{(\varepsilon)} \text{ only finitely many times}) = 1 \\
\iff \mathbb{P}(\forall \varepsilon >0, Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) =0 \,. $$
Notice that we can write:
$$ \{ \forall \varepsilon >0, Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} = \bigcap_{\varepsilon > 0} \{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} \,.$$
The sequences $u_n^{(\varepsilon)}$ become uniformly larger as $\varepsilon$ decreases, so we can conclude that the events $$\{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} $$ are decreasing (or at least somehow monotonic) as $\varepsilon$ goes to $0$. Therefore the probability axiom regarding monotonic sequences of events allows us to conclude that:
$$\mathbb{P}(\forall \varepsilon >0, Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) = \\
\mathbb{P} \left( \bigcap_{\varepsilon > 0} \{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} \right) = \\
\mathbb{P} \left( \lim_{\varepsilon \downarrow 0} \{ Z_n \le u_n^{(\varepsilon)} \text{ infinitely often} \} \right) = \\
\lim_{\varepsilon \downarrow 0} \mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) \,.$$
Therefore it suffices to show that for all $\varepsilon >0$,
$$\mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ infinitely often}) = 0 $$
because of course the limit of any constant sequence is the constant.
Here is somewhat of a sledgehammer result:
Theorem 4.3.1., p. 252 of Galambos, The Asymptotic Theory of Extreme Order Statistics, 2nd edition. Let $X_1, X_2, \dots$ be i.i.d. variables with common nondegenerate and continuous distribution function $F(x)$, and let $u_n$ be a nondecreasing sequence such that $n(1 - F(u_n))$ is also nondecreasing. Then, for $u_n < \sup \{ x: F(x) <1 \}$, $$\mathbb{P}(Z_n \le u_n \text{ infinitely often}) =0 \text{ or }1 $$
according as
$$\sum_{j=1}^{+\infty}[1 - F(u_j)]\exp(-j[1-F(u_j)]) < +\infty \text{ or }=+\infty \,. $$
The proof is technical and takes around five pages, but ultimately it turns out to be a corollary of one of the Borel-Cantelli lemmas. I may get around to trying to condense the proof to only use the part required for this analysis as well as only the assumptions which hold in the Gaussian case, which may be shorter (but maybe it isn't) and type it up here, but holding your breath is not recommended. Note that in this case $\omega(F)=+\infty$, so that condition is vacuous, and $n(1-F(n))$ is $\varepsilon \log n$ thus clearly non-decreasing.
Anyway the point being that, appealing to this theorem, if we can show that:
$$\sum_{j=1}^{+\infty}[1 - F(u_j^{(\varepsilon)})]\exp(-j[1-F(u_j^{(\varepsilon)})]) = \sum_{j=1}^{+\infty}\left[ \frac{\varepsilon \log j}{j} \right]\exp(-\varepsilon \log j) = \varepsilon \sum_{j=1}^{+\infty} \frac{ \log j}{j^{1 + \varepsilon}} < + \infty \,. $$
Note that since logarithmic growth is slower than any power law growth for any positive power law exponent (logarithms and exponentials are monotonicity preserving, so $\log \log n \le \alpha \log n \iff \log n \le n^{\alpha}$ and the former inequality can always be seen to hold for all $n$ large enough due to the fact that $\log n \le n$ and a change of variables), we have that:
$$ \sum_{j=1}^{+\infty} \frac{\log j}{j^{1 + \varepsilon}} \le \sum_{j=1}^{+\infty} \frac{j^{\varepsilon/2}}{j^{1 + \varepsilon}} = \sum_{j=1}^{+\infty} \frac{1}{j^{1 + \varepsilon/2}} < +\infty \,,$$
since the p-series is known to converge for all $p>1$, and $\varepsilon >0$ of course implies $1 + \varepsilon/2 > 1$.
Thus using the above theorem we have shown that for all $\varepsilon >0$, $\mathbb{P}(Z_n \le u_n^{(\varepsilon)} \text{ i.o.}) = 0$, which to recapitulate should mean that $\Xi_n = o(\log n)$ almost surely.
We need to show still that $\log \Xi_n = o(\log \log n)$. This doesn't follow from the above, since, e.g.,
$$ \frac{1}{n} \log n = o(\log n) \,, \quad \text{yet} \quad \log\!\left(\frac{1}{n} \log n\right) = \log \log n - \log n \not= o(\log \log n) \,. $$
However, given a sequence $x_n$, if one can show that $x_n = o( (\log n)^{\delta})$ for arbitrary $\delta >0$, then it does follow that $\log(x_n) = o(\log \log n)$. Ideally I would like to be able to show this for $\Xi_n$ using the above lemma (assuming it's even true), but am not able to (as of yet). | What is the most powerful result about the maximum of i.i.d. Gaussians? The most used in practice?
W.I.P.: Work in progress
Following p. 370 of Cramer's 1946 Mathematical Methods of Statistics, define $$\Xi_n = n(1 - \Phi(Z_n)) \,. $$ Here $\Phi$ is the cumulative distribution function of the stand |
32,612 | When is the AIC a good model selection criterion for forecasting and when is it not? | I am not completely satisfied with my answer, but here goes.
To an extent, you are comparing apples and oranges. Your two calls to Arima() use method="ML", whereas your auto.arima() uses the default, which is method="CSS-ML". Then again, refitting everything with the default does not make a real difference.
Minimizing the AIC is asymptotically equivalent to minimizing the one-step ahead squared prediction error. (I don't have a reference at hand, sorry.) Note that this is an asymptotic result in a suitable statistical sense. It's quite possible for a handpicked model to outperform AIC on a limited length time series. And on a single one, at that.
Finally, as you write in a comment, the AirPassengers dataset exhibits strong multiplicative seasonality. ARIMA does not model multiplicative seasonality or trend; it can only deal with additive effects. Your overparameterized model manages to mimic the multiplicative trend and seasonality here, but a model that flexible may just as readily "find" such patterns in a series that does not exhibit them. There are good reasons why such large models are typically not considered.
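(For reproducibility: train and test are defined in the question, not shown here; a plausible reconstruction, and it is only an assumption, is to hold out the last 24 months of AirPassengers to match h=24.)
library(forecast)
train <- window(AirPassengers, end = c(1958, 12))   # first 120 monthly observations
test <- window(AirPassengers, start = c(1959, 1))   # last 24 months, matching h = 24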
To model multiplicative effects, allow auto.arima() to use Box-Cox transformations:
> (foo <- auto.arima(train,lambda="auto"))
Series: train
ARIMA(0,1,1)(0,1,1)[12]
Box Cox transformation: lambda= -0.3096628
Coefficients:
ma1 sma1
-0.3936 -0.5713
s.e. 0.1035 0.0863
> accuracy(forecast(foo,h=24,biasadj=TRUE),test)
ME RMSE MAE MPE MAPE MASE ACF1 Theil's U
Training set -0.7186038 8.915531 6.691014 -0.2079082 2.753580 0.2341638 0.04889565 NA
Test set 28.5600533 31.711896 28.884516 6.2710488 6.348486 1.0108644 0.17279165 0.6372069
I cut out the AIC, because that is not comparable to the AIC on nontransformed data. Note that we end up much closer to your large model in terms of the test RMSE, but the model is much more interpretable, and I personally would trust it a lot more than an ARIMA(15,1,15)(4,1,4)[12] one. Incidentally, searching through more possible ARIMA models yields the exact same model:
> (bar <- auto.arima(train,max.p=15,max.q=15,max.P=4,max.Q=4,
+ lambda="auto",stepwise=FALSE,approximation=FALSE))
Series: train
ARIMA(0,1,1)(0,1,1)[12]
Box Cox transformation: lambda= -0.3096628
Coefficients:
ma1 sma1
-0.3936 -0.5713
s.e. 0.1035 0.0863 | When is the AIC a good model selection criterion for forecasting and when is it not? | I am not completely satisfied with my answer, but here goes.
To an extent, you are comparing apples and oranges. Your two calls to Arima() use method="ML", whereas your auto.arima() uses the default, | When is the AIC a good model selection criterion for forecasting and when is it not?
I am not completely satisfied with my answer, but here goes.
To an extent, you are comparing apples and oranges. Your two calls to Arima() use method="ML", whereas your auto.arima() uses the default, which is method="CSS-ML". Then again, refitting everything with the default does not make a real difference.
Minimizing the AIC is asymptotically equivalent to minimizing the one-step ahead squared prediction error. (I don't have a reference at hand, sorry.) Note that this is an asymptotic result in a suitable statistical sense. It's quite possible for a handpicked model to outperform AIC on a limited length time series. And on a single one, at that.
Finally, as you write in a comment, the AirPassengers dataset exhibits strong multiplicative seasonality. ARIMA does not model multiplicative seasonality or trend; it can only deal with additive effects. Your overparameterized model gets the multiplicative trend and seasonality right, but it may also forecast this in a series that does not exhibit such effects. There are reasons why such large models are typically not considered.
To model multiplicative effects, allow auto.arima() to use Box-Cox transformations:
> (foo <- auto.arima(train,lambda="auto"))
Series: train
ARIMA(0,1,1)(0,1,1)[12]
Box Cox transformation: lambda= -0.3096628
Coefficients:
ma1 sma1
-0.3936 -0.5713
s.e. 0.1035 0.0863
> accuracy(forecast(foo,h=24,biasadj=TRUE),test)
ME RMSE MAE MPE MAPE MASE ACF1 Theil's U
Training set -0.7186038 8.915531 6.691014 -0.2079082 2.753580 0.2341638 0.04889565 NA
Test set 28.5600533 31.711896 28.884516 6.2710488 6.348486 1.0108644 0.17279165 0.6372069
I cut out the AIC, because that is not comparable to the AIC on nontransformed data. Note that we end up much closer to your large model in terms of the test RMSE, but the model is much more interpretable, and I personally would trust it a lot more than an ARIMA(15,1,15)(4,1,4)[12] one. Incidentally, searching through more possible ARIMA models yields the exact same model:
> (bar <- auto.arima(train,max.p=15,max.q=15,max.P=4,max.Q=4,
+ lambda="auto",stepwise=FALSE,approximation=FALSE))
Series: train
ARIMA(0,1,1)(0,1,1)[12]
Box Cox transformation: lambda= -0.3096628
Coefficients:
ma1 sma1
-0.3936 -0.5713
s.e. 0.1035 0.0863 | When is the AIC a good model selection criterion for forecasting and when is it not?
I am not completely satisfied with my answer, but here goes.
To an extent, you are comparing apples and oranges. Your two calls to Arima() use method="ML", whereas your auto.arima() uses the default, |
32,613 | When to normalize data when using two datasets from the same distribution? | You should apply the same transformation to all individuals.
Don't use method 1; it will be biased. An easy way to realize this is to imagine that two individuals with identical features exist in $D_1$ and $D_2$. You would want these two individuals to also be identical in the transformed datasets, but your method 1 doesn't allow this.
Method 2 is OK. If you want to train sequentially, another option would be to apply the transformation induced by mean_1 and std_dev_1 to all data points; note however that this can lead to issues if future data points are vastly different from the data in $D_1$. | When to normalize data when using two datasets from the same distribution? | You should apply the same transformation to all individuals.
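A minimal sketch of method 2 in R, assuming D1 and D2 are numeric matrices with the same columns (that layout is my assumption, not something stated in the question):
D_all <- rbind(D1, D2)                      # pool the two datasets
mu <- colMeans(D_all)                       # one mean per feature
sdev <- apply(D_all, 2, sd)                 # one standard deviation per feature
standardize <- function(X) sweep(sweep(X, 2, mu, "-"), 2, sdev, "/")
D1_std <- standardize(D1)                   # identical rows in D1 and D2 now map
D2_std <- standardize(D2)                   # to identical standardized rows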
Don't use method 1; it will be biased. An easy way to realize this is to imagine that two individuals with identical features exist in $D_1 | When to normalize data when using two datasets from the same distribution?
You should apply the same transformation to all individuals.
Don't use method 1; it will be biased. An easy way to realize this is to imagine that two individuals with identical features exist in $D_1$ and $D_2$. You would want these two individuals to also be identical in the transformed datasets, but your method 1 doesn't allow this.
Method 2 is OK. If you want to train sequentially, another option would be to apply the transformation induced by mean_1 and std_dev_1 to all data points; note however that this can lead to issues if future data points are vastly different from the data in $D_1$. | When to normalize data when using two datasets from the same distribution?
You should apply the same transformation to all individuals.
Don't use method 1; it will be biased. An easy way to realize this is to imagine that two individuals with identical features exist in $D_1 |
32,614 | When to normalize data when using two datasets from the same distribution? | If D1 and D2 are truly from the same distribution, then as long as D1 has at least a few hundred datapoints, you shouldn't be seeing much variation in the mean and standard deviation, so normalizing all the data based on D1 shouldn't pose much of a problem. Normalizing each subset of data based on its own mean and standard deviation means that your overfitting is going to follow more from the subset sample size than the overall sample size, and in a way you're introducing a spurious feature of "what subsample did this datapoint come from?". Normalizing the data with different means and std shouldn't affect the result of the neural net training so much as how long it takes to converge, and if you're worried about the latter, consider taking the results from one as the initial values for the next. | When to normalize data when using two datasets from the same distribution? | If D1 and D2 are truly from the same distribution, then as long as D1 has at least a few hundred datapoints, you shouldn't be seeing much variation in the mean and standard deviation, so normalizing a | When to normalize data when using two datasets from the same distribution?
If D1 and D2 are truly from the same distribution, then as long as D1 has at least a few hundred datapoints, you shouldn't be seeing much variation in the mean and standard deviation, so normalizing all the data based on D1 shouldn't pose much of a problem. Normalizing each subset of data based on its own mean and standard deviation means that your overfitting is going to follow more from the subset sample size than the overall sample size, and in a way you're introducing a spurious feature of "what subsample did this datapoint come from?". Normalizing the data with different means and std shouldn't affect the result of the neural net training so much as how long it takes to converge, and if you're worried about the latter, consider taking the results from one as the initial values for the next. | When to normalize data when using two datasets from the same distribution?
If D1 and D2 are truly from the same distribution, then as long as D1 has at least a few hundred datapoints, you shouldn't be seeing much variation in the mean and standard deviation, so normalizing a |
32,615 | When to normalize data when using two datasets from the same distribution? | A few remarks:
If both datasets are from the same distribution, both procedures should give the same result (as D1 and D2 would have the same mean and variance). But apparently they are not.
Check what the actual mean and variance are for each dataset. If they are the same, it does not matter. If they are different, then normalize each dataset with its own mean and variance (method 1).
Train on a shuffled dataset $D3 = \bar{D1} \cup \bar{D2}$, where a bar means normalization. Training first on one dataset and then on another may cause a lot of problems and should be done with caution (see catastrophic forgetting).
If new inputs will be from the same source as D2, then normalize them with mean and variance of D2.
Applying sigmoid on normalized values may not be necessary. | When to normalize data when using two datasets from the same distribution? | A few remarks:
If both datasets are from the same distribution, both procedures should give the same result (as D1 and D2 would have the same mean and variance). But apparently they are not.
Check wh | When to normalize data when using two datasets from the same distribution?
A few remarks:
If both datasets are from the same distribution, both procedures should give the same result (as D1 and D2 would have the same mean and variance). But apparently they are not.
Check what are actual mean and variance for each dataset. If they are the same - it does not matter. If they are different, then normalize each dataset with its own mean and variance (method 1)
Train on a shuffled dataset $D3 = \bar{D1} \cup \bar{D2}$, where a bar means normalization. Training first on one dataset and then on another may cause a lot of problem and should be done with caution (see catastrophic forgetting).
If new inputs will be from the same source as D2, then normalize them with mean and variance of D2.
Applying sigmoid on normalized values may not be necessary. | When to normalize data when using two datasets from the same distribution?
A few remarks:
If both datasets are from the same distribution, both procedures should give the same result (as D1 and D2 would have the same mean and variance). But apparently they are not.
Check wh |
32,616 | How to deal with unstable estimates during curve fitting? | I believe that your problem occurs because the algorithm stops too early (another issue would be ending up in a local minimum) and you can "solve" this by working on the stopping rule.
For the L-BFGS-B algorithm in the optim the algorithm stops when the change of the objective function is smaller than a certain limit.
Zigzagging
Note that the optimum is not in the direction of the slope.
Even when there is a single (global) optimum, you may end up in a situation where the change of the function is much more extreme in certain directions than in others. The result is that the algorithm selects only a small step size, mostly determined by those dominant directions. You will only get a small change of the objective function, possibly resulting in early termination of the algorithm.
The way that the function will approach the optimum is in a zigzagging pattern which is only slowly converging and possibly early terminating.
Below are three ways/solutions to 'help' the algorithm. Another "solution" might be to use a different (smarter) algorithm.
Solution 1: scaling parameters
You can debug this by observing the Hessian matrix (the second order partial derivatives)
> optim(par=initp,estI01,refd=ds,
+ method="L-BFGS-B",
+ lower=lowb,
+ upper=uppb,
+ control=conl, hessian = 1) -> res3
> res3$hessian
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 7.609540375 5.339149352 1.253786410 2.902051e-02 -9.718628e-02 -4.618742e-03
[2,] 5.339149352 11.231282671 7.121692787 8.657414e-02 -4.019626e-03 -2.007495e-02
[3,] 1.253786410 7.121692787 11.868611589 3.210269e-02 1.689158e-01 -8.289745e-03
[4,] 0.029020509 0.086574137 0.032102688 -6.388602e-05 0.000000e+00 0.000000e+00
[5,] -0.097186278 -0.004019626 0.168915754 0.000000e+00 7.534015e-05 -2.602085e-14
[6,] -0.004618742 -0.020074953 -0.008289745 0.000000e+00 -2.602085e-14 -8.705671e-07
and you see that the change of the parameters 1-3 has more effect on the slope than the parameters 4-6.
If you scale your parameters (which changes the gradient and puts more weight on changes in the direction of the parameters 4-6) then you get the same results for the three starting conditions.
conl <- list(maxit = 10^4,
parscale = c(rep(10^0,3),rep(10^2,3))
)
Solution 2: Changing objective function and convergence limits
You can change the objective function such that you will not reach the machine limit so easily. For instance with your function you can change the mean (which involves a division of your objective function by 161) into the sum.
#return(mean(abs(refd$Irel - Iest)))
return(sum(abs(refd$Irel - Iest)))
and also change the conditions for convergence.
conl <- list(maxit=10^4,
factr = 1
)
The algorithm stops when the change of the function is below factr multiplied with the machine tolerance (the default is $10^7$ and setting it at $1$ is the most extreme you can go)
Solution 3: Segregated solving for parameters
(this works most effectively in your situation)
You can solve the first three parameters separately from the other three parameters. This can be done in various ways. For instance if you use this function
# I am putting the estimation in a separate function
# such that you can call this function separately, e.g. for plotting
Iest <- function(pars,refd, coefout = 0){
n <- length(pars)/2
outer(refd$nm, pars[n+1:n], Im, inv=T) -> Im.j
# use fitting to estimate the first three parameter values
fit <- L1pack::l1fit(x = Im.j, y = refd$Irel, intercept = 0)
#Iest <- Im.j%*%pars[1:n]
Iest <- fit$fitted.values
# the stuff with coefout allows you to
# use this function in optim but also outside optim
# when you want to get the coefficients
if (coefout == 0) {
Iest
} else {
fit$coefficients
}
}
estI01 <- function(pars,refd){
Iest <- Iest(pars,refd)
return(mean(abs((refd$Irel - Iest))^1))
}
Now optim only optimizes for three parameters. The optimization of the other three parameters is nested inside the prediction of the values. In this example this nested prediction is done with the function l1fit from the L1pack package because you seek to optimize the L1-norm. But this method of splitting up the variables is especially useful when you seek to optimize the L2-norm because then the optimization of the first three parameters can be done with an explicit function.
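A sketch of how this segregated version might then be called (initp, lowb, uppb and ds come from the original question; passing the full six-element vector is an assumption of mine: the first three entries are simply ignored and re-estimated inside Iest() by l1fit):
optim(par = initp, fn = estI01, refd = ds,
      method = "L-BFGS-B", lower = lowb, upper = uppb) -> res_seg
# recover the linear coefficients implied by the fitted nonlinear parameters
Iest(res_seg$par, ds, coefout = 1)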
Comparison of output from solution 1, 2 and 3
plotting the solutions in the colors red green and blue. | How to deal with unstable estimates during curve fitting? | I believe that your problem occurs because the algorithm stops too early (another issue would be ending up in a local minimum) and you can "solve" this by working on the stopping rule.
For the L-BFGS- | How to deal with unstable estimates during curve fitting?
I believe that your problem occurs because the algorithm stops too early (another issue would be ending up in a local minimum) and you can "solve" this by working on the stopping rule.
For the L-BFGS-B algorithm in the optim the algorithm stops when the change of the objective function is smaller than a certain limit.
Zigzagging
Note that the optimum is not in the direction of the slope.
Even when there is a single (global) maximum, what you may end up with is the situation that the change of the function is in certain directions more extreme than in other directions. The result is that the algorithm selects only a small step size and mostly determined by those dominant directions. You will only get a small change of the objective function, possibly resulting in the termination of the algorithm.
The way that the function will approach the optimum is in a zigzagging pattern which is only slowly converging and possibly early terminating.
Below are three ways/solutions too 'help' the algorithm. Another "solution" might be too use a different (smarter) algorithm.
Solution 1: scaling parameters
You can debug this by observing the Hessian matrix (the second order partial derivatives)
> optim(par=initp,estI01,refd=ds,
+ method="L-BFGS-B",
+ lower=lowb,
+ upper=uppb,
+ control=conl, hessian = 1) -> res3
> res3$hessian
[,1] [,2] [,3] [,4] [,5] [,6]
[1,] 7.609540375 5.339149352 1.253786410 2.902051e-02 -9.718628e-02 -4.618742e-03
[2,] 5.339149352 11.231282671 7.121692787 8.657414e-02 -4.019626e-03 -2.007495e-02
[3,] 1.253786410 7.121692787 11.868611589 3.210269e-02 1.689158e-01 -8.289745e-03
[4,] 0.029020509 0.086574137 0.032102688 -6.388602e-05 0.000000e+00 0.000000e+00
[5,] -0.097186278 -0.004019626 0.168915754 0.000000e+00 7.534015e-05 -2.602085e-14
[6,] -0.004618742 -0.020074953 -0.008289745 0.000000e+00 -2.602085e-14 -8.705671e-07
and you see that the change of the parameters 1-3 has more effect on the slope than the parameters 4-6.
If you scale your parameters (which changes the gradient and puts more weight on changes in the direction of the parameters 4-6) then you get the same results for the three starting conditions.
conl <- list(maxit = 10^4,
parscale = c(rep(10^0,3),rep(10^2,3))
)
Solution 2: Changing objective function and convergence limits
You can change the objective function such that you will not reach the machine limit so easily. For instance with your function you can change the mean (which involves a division of your objective function by 161) into the sum.
#return(mean(abs(refd$Irel - Iest))
return(sum(abs(refd$Irel - Iest)))
and also change the conditions for convergence.
conl <- list(maxit=10^4,
factr = 1
)
The algorithm stops when the change of the function is below factr multiplied with the machine tolerance (the default is $10^7$ and setting it at $1$ is the most extreme you can go)
Solution 3: Segreated solving for parameters
(this works most effectively in your situation)
You can solve the first three parameters separately from the other three parameters. This can be done in various ways. For instance if you use this function
# I am putting the estimation in a seperate function
# such that you call this function seperately, e.g. for plotting
Iest <- function(pars,refd, coefout = 0){
n <- length(pars)/2
outer(refd$nm, pars[n+1:n], Im, inv=T) -> Im.j
# use fitting to estimate the first three parameter values
fit <- L1pack::l1fit(x = Im.j, y = refd$Irel, intercept = 0)
#Iest <- Im.j%*%pars[1:n]
Iest <- fit$fitted.values
# the stuff with coefout allows you to
# use this function in optim but also outside optim
# when you want to get the coefficients
if (coefout == 0) {
Iest
} else {
fit$coefficients
}
}
estI01 <- function(pars,refd){
Iest <- Iest(pars,refd)
return(mean(abs((refd$Irel - Iest))^1))
}
Now optim only optimizes for three parameters. The optimization of the other three parameters is nested inside the prediction of the values. In this example this nested prediction is done with the function l1fit from the L1pack package because you seek to optimize the L1-norm. But this method of splitting up the variables is especially useful when you seek to optimize the L2-norm because then the optimization of the first three parameters can be done with an explicit function.
Comparison of output from solution 1, 2 and 3
plotting the solutions in the colors red green and blue. | How to deal with unstable estimates during curve fitting?
I believe that your problem occurs because the algorithm stops too early (another issue would be ending up in a local minimum) and you can "solve" this by working on the stopping rule.
For the L-BFGS- |
32,617 | Why is N/k the effective number of parameters in k-NN? | To answer that, you need to ask yourself what a model is. For example, a simple univariate linear model $y = \beta_0 + \beta_1 x$, where you have 2 parameters $\beta_0, \beta_1$ and input data $x$. You can predict values by introducing new data into $x$ and exploiting the knowledge which consists of the linear relation and 2 parameters. No other knowledge is needed to predict, so it is fine to say that the parameters of the linear model are 2. No training data values are included into parameters, they are sublimated into the regression model parameters.
Now consider k nearest neighbors. What is the model? You predict something like the class index with the maximum count among the neighbors of input x. Going on, if we further assume that the regions do not overlap (which is a somewhat strong assumption, but does not affect the main idea), then you can conclude that in general you will have $n/k$ regions, and in each region you predict with the majority count of the target variable. This majority count is all that is used for prediction; no other information is required. Your model (assuming non-overlapping regions) is something like $y = \sum_{r=1}^{n/k} \operatorname{argmax}_k (\operatorname{count}(y_j)) \, 1_{j \in \text{region}_r}$, where $1$ is the indicator function, equal to $1$ if $j \in \text{region}_r$ and $0$ otherwise.
From that formalization, you can see that for each region you have a parameter which is fitted as the argmax, so you can say that you have $n/k$ parameters, since only those are used for prediction (other than the input $x$, which is not a parameter).
This can go further to an extreme case, for example consider $k=n$. In this case, we have a single region, for that region we fit a single parameter as the argmax index of the majority category.
The assumption for non overlapping regions looks rather strong, but it is used in order to simplify the calculus a lot. | Why is N/k the effective number of parameters in k-NN? | To answer that, you need to ask yourself what a model is. For example, a simple univariate linear model $y = \beta_0 + \beta_1 x$, where you have 2 parameters $\beta_0, \beta_1$ and input data $x$. Yo | Why is N/k the effective number of parameters in k-NN?
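A small illustration of the same intuition, using knn() from the class package that ships with R (the simulated data and the particular k values are arbitrary choices): with k = 1 the fit can essentially memorize the training set (roughly n effective parameters), while k = n reduces to predicting a single global majority class (roughly one parameter).
library(class)
set.seed(42)
n <- 100
x <- matrix(rnorm(n * 2), ncol = 2)
y <- factor(ifelse(x[, 1] + x[, 2] + rnorm(n, sd = 0.5) > 0, "A", "B"))
for (k in c(1, 5, 25, n)) {
  pred <- knn(train = x, test = x, cl = y, k = k)   # predictions on the training points
  cat(sprintf("k = %3d: training accuracy = %.2f\n", k, mean(pred == y)))
}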
To answer that, you need to ask yourself what a model is. For example, a simple univariate linear model $y = \beta_0 + \beta_1 x$, where you have 2 parameters $\beta_0, \beta_1$ and input data $x$. You can predict values by introducing new data into $x$ and exploiting the knowledge which consists of the linear relation and 2 parameters. No other knowledge is needed to predict, so it is fine to say that the parameters of the linear model are 2. No training data values are included into parameters, they are sublimated into the regression model parameters.
Now considering k nearest neighbor. Which is the model? You have something like the class index of maximum count from neighbor of input x. Now going on, if we further assume that the regions do not overlap (which is somehow a strong assumption, but does not affect the main idea), then you can conclude that in general you will have $n/k$ regions, and in each region you predict with a max count of the target variable. This max count is what is used for prediction, no other information is required. Your model (considering non overlapping regions) is something like $y = \sum_{r=1}^{n/k} argmax_k (count(y_j)) 1_{j \in region_r}$, where $1$ is indicator function equals with $1$ if $j \in region_r$, $0$ otherwise.
From that formalization, you can see that for each region you have a parameter which is fitted as $argmax_k$ so you can say that you have $n/k$ parameters, since those only are used for prediction (other that input $x$ which is not a parameter).
This can go further to an extreme case, for example consider $k=n$. In this case, we have a single region, for that region we fit a single parameter as the argmax index of the majority category.
The assumption for non overlapping regions looks rather strong, but it is used in order to simplify the calculus a lot. | Why is N/k the effective number of parameters in k-NN?
To answer that, you need to ask yourself what a model is. For example, a simple univariate linear model $y = \beta_0 + \beta_1 x$, where you have 2 parameters $\beta_0, \beta_1$ and input data $x$. Yo |
32,618 | Why is N/k the effective number of parameters in k-NN? | Remember that 'parameter' is different in statistics; instead of merely an argument to a function, a parameter is a value that describes a population. | Why is N/k the effective number of parameters in k-NN? | Remember that 'parameter' is different in statistics; instead of merely an argument to a function, a parameter is a value that describes a population. | Why is N/k the effective number of parameters in k-NN?
Remember that 'parameter' is different in statistics; instead of merely an argument to a function, a parameter is a value that describes a population. | Why is N/k the effective number of parameters in k-NN?
Remember that 'parameter' is different in statistics; instead of merely an argument to a function, a parameter is a value that describes a population. |
32,619 | Why is N/k the effective number of parameters in k-NN? | Hastie's other works (Generalized Linear Models book and Effective Degrees of Freedom paper) are more precise on the issue of "effective number parameters" aka "effective degrees of freedom."
The idea is that for linear parametric methods, the number of parameters is proportional to the expected gap between train and test error. Take the noise-scale-independent component of this quantity and call the result "effective parameters". This lets you "count parameters" even when there are no parameters to speak of. This view also explains why "effective number of parameters" of ridge regression depends on the value of $\lambda$
Relevant part from the Effective Degrees of Freedom (DF) paper:
I haven't seen the derivation of the "DF" quantity for the nearest neighbor classifier; showing that DF equals $N/k$ in the nearest neighbor case would answer the question.
There's a related result for "running mean smoother of $k$ observations" in Hastie's "Generalized Linear Models" book, section 3.3, showing that the number of effective parameters is $O(1/k)$ | Why is N/k the effective number of parameters in k-NN? | Hastie's other works (Generalized Linear Models book and Effective Degrees of Freedom paper) are more precise on the issue of "effective number parameters" aka "effective degrees of freedom."
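For completeness, the usual linear-smoother argument (this is my addition, not a quote from the cited references): writing the fit as $\hat{y} = S y$, the effective degrees of freedom are $\operatorname{tr}(S)$, and for $k$-NN regression each observation receives weight $1/k$ from itself (assuming each point is counted among its own $k$ nearest neighbours), so
$$\operatorname{df} = \operatorname{tr}(S) = \sum_{i=1}^{N} S_{ii} = \sum_{i=1}^{N} \frac{1}{k} = \frac{N}{k}\,.$$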
The idea | Why is N/k the effective number of parameters in k-NN?
Hastie's other works (Generalized Linear Models book and Effective Degrees of Freedom paper) are more precise on the issue of "effective number parameters" aka "effective degrees of freedom."
The idea is that for linear parametric methods, the number of parameters is proportional to the expected gap between train and test error. Take the noise-scale-independent component of this quantity and call the result "effective parameters". This lets you "count parameters" even when there are no parameters to speak of. This view also explains why "effective number of parameters" of ridge regression depends on the value of $\lambda$
Relevant part from the Effective Degrees of Freedom (DF) paper:
I haven't seen the derivation of "DF" quantity for the nearest neighbor classifier, showing that DF is equal to $N/k$ for the case of nearest neighbor would answer the question.
There's a related result for "running mean smoother of $k$ observations" in Hastie's "Generalized Linear Models" book, section 3.3, showing that the number of effective parameters is $O(1/k)$ | Why is N/k the effective number of parameters in k-NN?
Hastie's other works (Generalized Linear Models book and Effective Degrees of Freedom paper) are more precise on the issue of "effective number parameters" aka "effective degrees of freedom."
The idea |
32,620 | Elo-type ranking system that incorporates game score | The way to generalize Elo is to consider it in the broader context of paired comparison models. These models were first developed in psychology to model rankings and preferences of participants over decisions and options. Classic Elo can be seen as a discretized dynamic approximation of the Bradley-Terry Model, which is amoung the earliest and most well known models in the paired comparison literature, and can be formulated as follows:
\begin{align}
P(i > j) = \frac{1}{1 + \exp(\beta_j - \beta_i)}
\end{align}
I.e., we model the probability of player i beating j (or, in general, object i ranking higher than j) as a function of the difference in relative strengths of players i and j ($\beta_i - \beta_j$). In the frequentist context, we can't identify the values of $\beta$, but we can estimate their values relative to one another (in the frequentist case, a constraint on the $\beta$s, e.g. that they sum to 1, is usually imposed to get identifiability if desired; in the Bayesian context, a prior on the $\beta$s achieves identifiability in the Bayesian sense).
So essentially, the Bradley Terry model is just a logistic regression (technically a dynamic approximation to one, for more precision about the relationship see https://www.stat.berkeley.edu/~aldous/Papers/me150.pdf), where the regressors are 1,-1 indicators for which players are playing in a given context.
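A hedged sketch of that equivalence in R (the data frame games with character columns winner and loser is hypothetical):
bt_fit <- function(games) {
  players <- sort(unique(c(games$winner, games$loser)))
  X <- matrix(0, nrow(games), length(players), dimnames = list(NULL, players))
  for (g in seq_len(nrow(games))) {
    X[g, games$winner[g]] <- 1            # +1 for the winner of game g
    X[g, games$loser[g]] <- -1            # -1 for the loser of game g
  }
  # every row is oriented winner-vs-loser, so the response is all ones;
  # one column is dropped for identifiability (that player's strength is the reference)
  glm(rep(1, nrow(games)) ~ X[, -1] - 1, family = binomial())
}
The fitted coefficients are the relative strengths $\beta$ (up to the reference player), which is exactly the Bradley-Terry parameterization above.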
The reason we see that classic elo is restricted to 0,1 outcomes is because of the likelihood, which models binary outcomes. We see then that the solution to accommodating different outcomes is by modifying the likelihood. There is a huge literature adapting paired comparison models to nearly every relevant case seen in sports imaginable. At the end of this post I will put a few examples of papers.
In american football this was done by modelling the difference in scores as a normal. Say that $S_A, S_B$ are the scores of team A and B respectively (typically A is meant to denote the home team and a home advantage is modelled).
$$S_A - S_B \sim N(f(\beta_A - \beta_B), \sigma)$$
We model score differences again as some function of the difference in strengths. The variance $(\sigma)$ can be modelled in a number of different ways including as a function of score variances for each team and/or something that varies over time. See https://www.researchgate.net/publication/2244176_A_State-Space_Model_for_National_Football_League_Scores for this example. This is one of the most influential papers in the literature. Notice that in a previous paper, they mention that they spent time validating the assumption that observed score differences in American football are approximately normal. In lower scoring sports this assumption is unlikely to hold. In a really low scoring sport, say European football, one could model the outcomes as an ordered logit or probit for example. Different sports will require different likelihoods and approaches and validation to capture the underlying winning process.
An extremely clever paper attacked the problem of finding a unifying approach to different sports by modelling betting log odds directly as a normal distribution (https://arxiv.org/pdf/1701.05976.pdf). The idea is that while the game scores in different sports have completely different properties, betting markets work the same way across many sports. Betting lines imply market probabilities, and log-odds are approximately normal quantities. This is a good way of extracting more information than simple binary win-loss if you can get your hands on betting information.
So in short, the solution is to consider Elo in the context of paired comparison models. This framework is richer and more flexible: it allows different likelihood specifications and can easily accommodate ratings which vary over time (in the Bayesian context at least). It is also easier to include covariates in this framework. Most Elo-type models can accommodate home advantage, but rarely more covariates. The only advantage of Elo-type models themselves is that they are easy to compute dynamically, which is a really useful property if the goal is to create rankings for, say, online chess or video games. Only some paired comparison models have been turned into Elo-like models, but this is a growing literature; Microsoft TrueSkill is one example. If the goal is, say, betting, or a small non-dynamic data set, this shouldn't be much of a drawback. There are many existing packages for a lot of paired comparison models as well.
Additional papers
Modelling Sports with rankings (I.e track and field sports for example): http://www.glicko.net/research/multicompetitor.pdf
Stochastic and Dynamic Model: http://www.glicko.net/research/dpcmsv.pdf | Elo-type ranking system that incorporates game score | The way to generalize Elo is to consider it in the broader context of paired comparison models. These models were first developed in psychology to model rankings and preferences of participants over d | Elo-type ranking system that incorporates game score
The way to generalize Elo is to consider it in the broader context of paired comparison models. These models were first developed in psychology to model rankings and preferences of participants over decisions and options. Classic Elo can be seen as a discretized dynamic approximation of the Bradley-Terry Model, which is amoung the earliest and most well known models in the paired comparison literature, and can be formulated as follows:
\begin{align}
P(i > j) = \frac{1}{1 + exp(\beta_j - \beta_i)}
\end{align}
I.e we model the probability of player i beating j (or general object i ranking higher than j) as being a function of the difference in relative strengths of the players, i and j ($\beta_i - \beta_j$). In the frequentist context, we can't identify the values of $\beta$, but we can estimate their values relative to one another (In the frequentist case, usually a constraint is placed on the $\beta$s, i.e they sum to 1, is used to get identifiability if desired. In the Bayesian context, a prior on the $\beta$s achieves identifiability in the Bayesian sense).
So essentially, the Bradley Terry model is just a logistic regression (technically a dynamic approximation to one, for more precision about the relationship see https://www.stat.berkeley.edu/~aldous/Papers/me150.pdf), where the regressors are 1,-1 indicators for which players are playing in a given context.
The reason we see that classic elo is restricted to 0,1 outcomes is because of the likelihood, which models binary outcomes. We see then that the solution to accommodating different outcomes is by modifying the likelihood. There is a huge literature adapting paired comparison models to nearly every relevant case seen in sports imaginable. At the end of this post I will put a few examples of papers.
In american football this was done by modelling the difference in scores as a normal. Say that $S_A, S_B$ are the scores of team A and B respectively (typically A is meant to denote the home team and a home advantage is modelled).
$$S_A - S_B \sim N(f(\beta_A - \beta_B), \sigma)$$
We model score differences again as some function of the difference in strengths. The variance $(\sigma)$ can be modelled in a number of different ways including as a function of score variances for each team and/or something that varies over time. See https://www.researchgate.net/publication/2244176_A_State-Space_Model_for_National_Football_League_Scores for this example. This is one of the most influential papers in the literature. Notice that in a previous paper, they mention that they spent time validating the assumption that observed score differences in American football are approximately normal. In lower scoring sports this assumption is unlikely to hold. In a really low scoring sport, say European football, one could model the outcomes as an ordered logit or probit for example. Different sports will require different likelihoods and approaches and validation to capture the underlying winning process.
An extremely clever and good paper, attacked the problem of finding a unifying approach to different sports, by modelling betting log odds directly as a normal distribution(https://arxiv.org/pdf/1701.05976.pdf). The idea is that while the game scores in different sports have completely different properties, betting markets are the same across many sports. Betting lines imply market probabilities and log-odds are approximately normal quantities. This is a good way of extracting more information that simple binary win-loss if you can get your hands on betting information.
So in short, the solution is to consider Elo in the context of paired comparison models. This framework is richer and more flexible, allowing for different likelihood specification and additionally can easily accomodate ratings which vary over time (in the bayesian context at least). It is also easier to accommodate covariates in the framework. Most elo-type models are able to accomodate home advantage, but rarely more covariates. The only advantage of Elo-type themselves models is that they are easy to calculate dynamically, which is a really useful property if the goal is to create rankings for say online chess or videogames. Only some paired comparison models have been turned into elo-like models, but this is a growing literature. Microsoft trueskill is an example for one. If the goal is for say betting or for a small non-dynamic data set, this shouldn't be much of a drawback. There are many existing packages for a lot of paired comparison models as well.
Additional papers
Modelling Sports with rankings (I.e track and field sports for example): http://www.glicko.net/research/multicompetitor.pdf
Stochastic and Dynamic Model: http://www.glicko.net/research/dpcmsv.pdf | Elo-type ranking system that incorporates game score
The way to generalize Elo is to consider it in the broader context of paired comparison models. These models were first developed in psychology to model rankings and preferences of participants over d |
32,621 | Elo-type ranking system that incorporates game score | In the context of Elo's rating system, the update formula is as follows:
R' = R + K (S - E)
K is the K-factor, which can be used to fine-tune the system for your particular requirements (application). S is a discrete variable that is 1 if the player wins, 1/2 for a draw, and 0 for a loss. E is the probability of winning.
Said that, if you want to incorporate the score, in reality, the reward of a match for a winning player can be as follows:
R' - R = K (1 - E) = 10 (from a 10-1 score, for example), where E is something you can calculate if you have the initial ratings of both players or teams.
Therefore, you will have to play with the K-factor (a bit of data-analysis) to apply Elo's rating systems for your requirements. | Elo-type ranking system that incorporates game score | In the context of Elo's rating system, the update formula is as follows:
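A minimal sketch of such a margin-aware update in R (the 400-point logistic scale is the standard Elo convention; the particular margin scaling of K is an arbitrary illustrative choice):
elo_update <- function(r_a, r_b, s_a, margin = 1, k_base = 20) {
  e_a <- 1 / (1 + 10^((r_b - r_a) / 400))   # expected score of player A
  k <- k_base * margin                      # naive margin-of-victory scaling of K
  r_a + k * (s_a - e_a)                     # updated rating of player A
}
elo_update(1500, 1500, s_a = 1, margin = 9) # A beats an equally rated B by, say, 10-1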
R' = R + K (S - E)
K is the K-Factor, which can be used to tune-fine your system for your particular requirements (application) | Elo-type ranking system that incorporates game score
In the context of Elo's rating system, the update formula is as follows:
R' = R + K (S - E)
K is the K-Factor, which can be used to tune-fine your system for your particular requirements (application). S is a discrete variable that can be 1 if the player wins, 1/2 if ties, and 0 if lose. E is the probability of winning.
Said that, if you want to incorporate the score, in reality, the reward of a match for a winning player can be as follows:
R'- R = K (1 - E) = 10 (from 10-1, for example), and E is something that you can calculate if you have the initial rating of both players or teams.
Therefore, you will have to play with the K-factor (a bit of data-analysis) to apply Elo's rating systems for your requirements. | Elo-type ranking system that incorporates game score
In the context of Elo's rating system, the update formula is as follows:
R' = R + K (S - E)
K is the K-Factor, which can be used to tune-fine your system for your particular requirements (application) |
32,622 | How to correctly represent difference variables in DAGs? | The solution is to think functionally.
The value of $\Delta E_{2}$ is a function of its parents, $\Delta E_{2} = f(E_{1},E_{2})$; more specifically, $\Delta E_{2} = E_{2} - E_{1}$. Therefore difference variables may be represented in DAGs by option 4, "something else" (this DAG assumes $E_{1}$ and $E_{2}$ directly cause $O$ in addition to their difference):
If $E_{1}$ & $E_{2}$ do not have direct effects on $O$, $\Delta E_{2}$ still remains a function of its parents:
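A textual version of this second DAG can be written down with the dagitty package in R (the node name dE2 for the difference is my shorthand; the direct $E_1 \to O$ and $E_2 \to O$ edges are omitted here):
library(dagitty)
g <- dagitty("dag {
  E1 -> E2
  E1 -> dE2
  E2 -> dE2
  dE2 -> O
}")
paths(g, "E1", "O")$paths   # every path from E1 to O runs through dE2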
If we rewrite the single lag generalized error correction model thus ($Q_{t-1}$ for 'eQuilibrium term', where $Q_{t-1} = O_{t-1} - E_{t-1}$):
$$\Delta O_t = \beta_{0} + \beta_{\text{c}}\left(Q_{t-1}\right) + \beta_{\Delta E}\Delta E_{t} + \beta_E E_{t-1} + \varepsilon_t$$
Then the DAG underlying the model for $\Delta O_{t}$ (ignoring its descendants at $t+1$) is:
The effects of $E$ on $\Delta O_{t}$ from the model thus enter from equilibrium term $Q_{t-1}$, from $E_{t-1}$ and from change term $\Delta E_{t}$. Other causes of $O_{t-1}$, $O_{t}$, $E_{t-1}$ and $E_{t}$ (e.g., unmodeled variables, random inputs) are left implicit.
The portion of this answer corresponding to the first two DAGs is courtesy of personal communication with Miguel Hernán. | How to correctly represent difference variables in DAGs? | The solution is to think functionally.
The value of $\Delta E_{2} = f(E_{1},E_{2})$ more specifically$ \Delta E_{2} = E_{2} - E_{1}$. Therefore difference variables may be represented in DAGs by optio | How to correctly represent difference variables in DAGs?
The solution is to think functionally.
The value of $\Delta E_{2} = f(E_{1},E_{2})$ more specifically$ \Delta E_{2} = E_{2} - E_{1}$. Therefore difference variables may be represented in DAGs by option 4, "something else" (this DAG assumes $E_{1}$ and $E_{2}$ directly cause $O$ in addition to their difference):
If $E_{1}$ & $E_{2}$ do not have direct effects on $O$, $\Delta E_{2}$ still remains a function of its parents:
If we rewrite the single lag generalized error correction model thus ($Q_{t-1}$ for 'eQuilibrium term', where $Q_{t-1} = O_{t-1} - E_{t-1}$):
$$\Delta O_t = \beta_{0} + \beta_{\text{c}}\left(Q_{t-1}\right) + \beta_{\Delta E}\Delta E_{t} + \beta_E E_{t-1} + \varepsilon_t$$
Then the DAG underlying the model for $\Delta O_{t}$ (ignoring its descendants at $t+1$) is:
The effects of $E$ on $\Delta O_{t}$ from the model thus enter from equilibrium term $Q_{t-1}$, from $E_{t-1}$ and from change term $\Delta E_{t}$. Other causes of $O_{t-1}$, $O_{t}$, $E_{t-1}$ and $E_{t}$ (e.g., unmodeled variables, random inputs) are left implicit.
The portion of this answer corresponding to the first two DAGs is courtesy of personal communication with Miguel Hernán. | How to correctly represent difference variables in DAGs?
The solution is to think functionally.
The value of $\Delta E_{2} = f(E_{1},E_{2})$ more specifically$ \Delta E_{2} = E_{2} - E_{1}$. Therefore difference variables may be represented in DAGs by optio |
32,623 | How to correctly represent difference variables in DAGs? | EDIT:
If you are only concerned with representing nonparametric relationships among your variables, I think 1) would be most appropriate. While there may be a more specific functional form relating the two variables to the outcome, in a DAG it is not necessary to represent that form. On the other hand, if you wanted to use a path diagram representing a linear structural equation model like the one you wrote, it would make sense to include the difference score in the diagram; this way, the specific model you wrote and the diagram would be equally specific. A DAG is more vague (but also more flexible) since it does not require (or necessarily allow) a specific functional form.
It might come down to the goal of drawing your DAG. If your goal is represent with as much precision as possible the relationships among your variables, it would make sense to include the difference term as its own variable since it does have its own causal force. A graph without it would also be valid. You could, in theory, make the same conditional independence statements about the outcome and the predictors with a more detailed DAG than with a less detailed one.
My intuition is closest to 3). If it's true that $E_1$ and $E_2$ do not directly affect $O$ except through their difference, then 3) is correct, and I would add edges from $E_1$ and $E_2$ to $\Delta E_2$ and from $E_1$ to $E_2$ for completeness. No other nodes would point to the difference variable, but variables that predict the difference would point instead to $E_1$ and/or $E_2$. Graphically, what I'm describing is:
E1 ---------> E2
 |             |
 |             v
 +---------> E2-E1 ----> O
with possible additional arrows from $E_1$ and $E_2$ to $O$ if they affect $O$ beyond their effect through their difference. | How to correctly represent difference variables in DAGs? | EDIT:
If you are only concerned with representing nonparametric relationships among your variables, I think 1) would be most appropriate. While there may be a more specific functional form relating th | How to correctly represent difference variables in DAGs?
EDIT:
If you are only concerned with representing nonparametric relationships among your variables, I think 1) would be most appropriate. While there may be a more specific functional form relating the two variables to the outcome, in a DAG it is not necessary to represent that form. On the other hand, if you wanted to use a path diagram representing a linear structural equation model like the one you wrote, it would make sense to include the difference score in the diagram; this way, the specific model you wrote and the diagram would be equally specific. A DAG is more vague (but also more flexible) since it does not require (or necessary allow) for specific function form.
It might come down to the goal of drawing your DAG. If your goal is represent with as much precision as possible the relationships among your variables, it would make sense to include the difference term as its own variable since it does have its own causal force. A graph without it would also be valid. You could, in theory, make the same conditional independence statements about the outcome and the predictors with a more detailed DAG than with a less detailed one.
My intuition is closest to 3). If it's true that $E_1$ and $E_2$ do not directly affect $O$ except through their difference, then 3) is correct, and I would add edges from $E_1$ and $E_2$ to $\Delta E_2$ and from $E_1$ to $E_2$ for completeness. No other nodes would point to the difference variable, but variables that predict the difference would point instead to $E_1$ and/or $E_2$. Graphically, what I'm describing is:
E1
|----> E2-E1 ---> O
V ^
E2-------|
with possible additional arrows from $E_1$ and $E_2$ to $O$ if they affect $O$ beyond their effect through their difference. | How to correctly represent difference variables in DAGs?
EDIT:
If you are only concerned with representing nonparametric relationships among your variables, I think 1) would be most appropriate. While there may be a more specific functional form relating th |
32,624 | What is the problem with overdifferencing a long memory time series? | First differences remove all the long term memory whilst fractional differences preserve some of it. If, therefore, the long term memory is important for your intended application fractional differencing is the way to go. Chapter 5 of the book Advances in Financial Machine Learning discusses this in some detail.
For example, assume you want to predict a long term ( financial?) trend using a machine learning algorithm and you feed it first differences as training data, it will be extremely difficult for the algorithm to learn the trend as it isn't present in the data it's presented with; the algorithm is much more likely to learn to predict just one step ahead, and thereby "failing."
The above book suggests that this "over-differencing" is one reason why ML on (financial) time series is so often problematic. The book also, tentatively, suggests that this may be the reason why the Efficient Market Hypothesis holds such sway among financial academia, in that removing the "historical memory" from the data logically leads to the conclusion that future prices cannot be predicted from past prices. Any evidence that contradicts this belief is called an anomaly, e.g. momentum. However, fractionally differenced financial data show that trends and momentum do persist, whilst at the same time being stationary. | What is the problem with overdifferencing a long memory time series? | First differences remove all the long term memory whilst fractional differences preserve some of it. If, therefore, the long term memory is important for your intended application fractional differenc | What is the problem with overdifferencing a long memory time series?
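A small sketch of the memory point in R (the weight recursion is the standard binomial expansion of $(1-B)^d$; the lag cutoff is arbitrary): with d = 1 only the first two weights are nonzero, so all longer-range memory is discarded, while with a fractional d the weights decay slowly and distant observations keep a small influence.
frac_diff_weights <- function(d, nlags) {
  w <- numeric(nlags + 1)
  w[1] <- 1
  for (k in 1:nlags) w[k + 1] <- -w[k] * (d - k + 1) / k
  w
}
round(frac_diff_weights(1.0, 6), 4)   # 1 -1 0 0 0 0 0 : first differences, no memory
round(frac_diff_weights(0.4, 6), 4)   # slowly decaying weights: long memory retained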
First differences remove all the long term memory whilst fractional differences preserve some of it. If, therefore, the long term memory is important for your intended application fractional differencing is the way to go. Chapter 5 of the book Advances in Financial Machine Learning discusses this in some detail.
For example, assume you want to predict a long term ( financial?) trend using a machine learning algorithm and you feed it first differences as training data, it will be extremely difficult for the algorithm to learn the trend as it isn't present in the data it's presented with; the algorithm is much more likely to learn to predict just one step ahead, and thereby "failing."
The above book suggests that this "over-differencing" is one reason why ML on (financial) time series is so often problematic. The book also, tentatively, suggests that this may be the reason why the Efficient Market Hypothesis holds such sway among financial academia, in that removing the "historical memory" from the data logically leads to the conclusion that future prices cannot be predicted from past prices. Any evidence that contradicts this belief is called an anomaly, e.g. momentum. However, fractionally differenced financial data show that trends and momentum do persist, whilst at the same time being stationary. | What is the problem with overdifferencing a long memory time series?
First differences remove all the long term memory whilst fractional differences preserve some of it. If, therefore, the long term memory is important for your intended application fractional differenc |
32,625 | Adaptively selecting the number of bootstrap replicates | If the estimation of $\theta$ on the replicates are normally distributed I guess you can estimate the error $\hat{\sigma}$ on $\hat{\theta}$ from the standard deviation $\sigma$:
$$
\hat{\sigma} = \frac{\sigma}{\sqrt{n}}
$$
then you can just stop when $1.96*\hat{\sigma} < \epsilon$.
Or have I misunderstood the question? Or do you want an answer without assuming normality and in presence of significant autocorrelations? | Adaptively selecting the number of bootstrap replicates | If the estimation of $\theta$ on the replicates are normally distributed I guess you can estimate the error $\hat{\sigma}$ on $\hat{\theta}$ from the standard deviation $\sigma$:
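A hedged sketch of that stopping rule in R (the statistic, the batch size and the normality-based 1.96 multiplier are all assumptions of the sketch, not prescriptions):
adaptive_boot <- function(x, statistic, eps, batch = 200, max_B = 1e5) {
  reps <- numeric(0)
  repeat {
    reps <- c(reps, replicate(batch, statistic(sample(x, replace = TRUE))))
    se_mc <- sd(reps) / sqrt(length(reps))   # Monte Carlo error of the bootstrap estimate
    if (1.96 * se_mc < eps || length(reps) >= max_B) break
  }
  list(estimate = mean(reps), B = length(reps), mc_se = se_mc)
}
adaptive_boot(rnorm(100), statistic = median, eps = 0.005)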
$$
\hat{\sigma} = \fr | Adaptively selecting the number of bootstrap replicates
If the estimation of $\theta$ on the replicates are normally distributed I guess you can estimate the error $\hat{\sigma}$ on $\hat{\theta}$ from the standard deviation $\sigma$:
$$
\hat{\sigma} = \frac{\sigma}{\sqrt{n}}
$$
then you can just stop when $1.96*\hat{\sigma} < \epsilon$.
Or have I misunderstood the question? Or do you want an answer without assuming normality and in presence of significant autocorrelations? | Adaptively selecting the number of bootstrap replicates
If the estimation of $\theta$ on the replicates are normally distributed I guess you can estimate the error $\hat{\sigma}$ on $\hat{\theta}$ from the standard deviation $\sigma$:
$$
\hat{\sigma} = \fr |
32,626 | Adaptively selecting the number of bootstrap replicates | On pages 113-114 of the first edition of my book Bootstrap Methods: A Practitioner's Guide Wiley (1999) I discuss methods for determining how many bootstrap replications to take when using the Monte Carlo approximation.
I go into detail about a procedure due to Hall that was described in his book The Bootstrap and Edgeworth Expansion, Springer-Verlag (1992). He shows that when the sample size n is large and the number of bootstrap replications B is large the variance of the bootstrap estimate is C/B where C is an unknown constant that does not depend on n or B. So if you can determine C or bound it above you can determine a value for B that makes the error of the estimate smaller than the $\epsilon$ that you specify in your question.
I describe a situation where C = 1/4. But if you don't have a good idea as to what the value of C is, you can resort to the approach you describe, where you take B = 500, say, then double it to 1000 and compare the difference in those bootstrap estimates. This procedure can be repeated until the difference is as small as you want it to be.
Another idea is given by Efron in the article "Better bootstrap confidence intervals (with discussion)", (1987) Journal of the American Statistical Association Vol. 82 pp 171-200. | Adaptively selecting the number of bootstrap replicates | On pages 113-114 of the first edition of my book Bootstrap Methods: A Practitioner's Guide Wiley (1999) I discuss methods for determining how many bootstrap replications to take when using the Monte C | Adaptively selecting the number of bootstrap replicates
On pages 113-114 of the first edition of my book Bootstrap Methods: A Practitioner's Guide Wiley (1999) I discuss methods for determining how many bootstrap replications to take when using the Monte Carlo approximation.
I go into detail about a procedure due to Hall that was described in his book The Bootstrap and Edgeworth Expansion, Springer-Verlag (1992). He shows that when the sample size n is large and the number of bootstrap replications B is large the variance of the bootstrap estimate is C/B where C is an unknown constant that does not depend on n or B. So if you can determine C or bound it above you can determine a value for B that makes the error of the estimate smaller than the $\epsilon$ that you specify in your question.
I describe a situation where C = 1/4. But if You don't have a good idea as to what the value C is you can resort to the approach you describe where you take B=500 say and then double it to 1000 and compare the difference in those bootstrap estimates. This procedure can be repeated until the difference is as small as you want it to be.
Another idea is given by Efron in the article "Better bootstrap confidence intervals (with discussion)", (1987) Journal of the American Statistical Association Vol. 82 pp 171-200. | Adaptively selecting the number of bootstrap replicates
On pages 113-114 of the first edition of my book Bootstrap Methods: A Practitioner's Guide Wiley (1999) I discuss methods for determining how many bootstrap replications to take when using the Monte C |
32,627 | One-shot object detection with Deep Learning | As it turns out, just training an ordinary object detection network with a bunch of data augmentation will get you some decent results.
I took the "coca cola" logo from your post, and performed some random augmentations on it. Then I downloaded 10000 random images from flickr and randomly pasted the logo onto these images. I also added random red regions to the images so the network wouldn't learn that any red blob was a valid object. Some samples from my training data:
I then trained an RCNN model on this dataset. Here are some test-set images I found on google images, and the model seems to do pretty ok.
The results aren't perfect, but I slapped this together in about 2 hours. I expect with a bit more care spent with data generation and with training the model, you could get far better results.
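For reference, a bare-bones sketch of that kind of data generation (my own reconstruction, not the author's actual script; the paths are placeholders, it assumes Pillow is installed and that the logo is smaller than the background):
import random
from PIL import Image

def make_training_image(background_path, logo_path):
    bg = Image.open(background_path).convert("RGB")
    logo = Image.open(logo_path).convert("RGBA")
    # random augmentations of the logo: scale and rotation
    scale = random.uniform(0.2, 0.8)
    logo = logo.resize((int(logo.width * scale), int(logo.height * scale)))
    logo = logo.rotate(random.uniform(-25, 25), expand=True)
    # paste at a random position and keep the bounding box as the detection label
    x = random.randint(0, bg.width - logo.width)
    y = random.randint(0, bg.height - logo.height)
    bg.paste(logo, (x, y), logo)
    return bg, (x, y, x + logo.width, y + logo.height)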
I think ideas from papers such as Learning to Model the Tail could be used to allow learning of new object categories with just one or a few examples, instead of needing to generate a bunch of data like I did, but I'm not aware of them doing any experiments with object detection. | One-shot object detection with Deep Learning | As it turns out, just training an ordinary object detection network with a bunch of data augmentation will get you some decent results.
I took the "coca cola" logo from your post, and performed some r | One-shot object detection with Deep Learning
As it turns out, just training an ordinary object detection network with a bunch of data augmentation will get you some decent results.
I took the "coca cola" logo from your post, and performed some random augmentations on it. Then I downloaded 10000 random images from flickr and randomly pasted the logo onto these images. I also added random red regions to the images so the network wouldn't learn that any red blob was a valid object. Some samples from my training data:
I then trained an RCNN model on this dataset. Here are some test-set images I found on google images, and the model seems to do pretty ok.
The results aren't perfect, but I slapped this together in about 2 hours. I expect with a bit more care spent with data generation and with training the model, you could get far better results.
I think ideas from papers such as Learning to Model the Tail could be used to allow learning of new object categories with just one or a few examples, instead of needing to generate a bunch of data like I did, but I'm not aware of them doing any experiments with object detection. | One-shot object detection with Deep Learning
As it turns out, just training an ordinary object detection network with a bunch of data augmentation will get you some decent results.
I took the "coca cola" logo from your post, and performed some r |
32,628 | Surprising behavior of the power of Fisher exact test (permutation tests) | Why the p-values different
There are two effects going on:
Because of the discreteness of the values you choose the 'most likely to happen' 0 2 1 1 1 vector. But this would differ from the (impossible) 0 1.25 1.25 1.25 1.25, which would have a smaller $\chi^2$ value.
The result is that the vector 5 0 0 0 0 is not being counted anymore
as an 'at least as extreme' case (5 0 0 0 0 has smaller $\chi^2$ than 0 2 1 1 1). This was the case before. The two
sided Fisher test on the 2x2 table counts both cases of the 5 exposures being in the first or the second group as equally extreme.
This is why the p-value differs by almost a factor 2. (not exactly because of the next point)
While you lose the 5 0 0 0 0 as an equally extreme case, you gain the 1 4 0 0 0 as a more extreme case than 0 2 1 1 1.
So the difference is in the boundary of the $\chi^2$ value (or a directly calculated p-value as used by the R implementation of the exact Fisher test). If you split up the group of 400 into 4 groups of 100 then different cases will be considered as more or less 'extreme' than the other. 5 0 0 0 0 is now less 'extreme' than 0 2 1 1 1. But 1 4 0 0 0 is more 'extreme'.
code example:
# probability of distribution a and b exposures among 2 groups of 400
draw2 <- function(a,b) {
choose(400,a)*choose(400,b)/choose(800,5)
}
# probability of distribution a, b, c, d and e exposures among 5 groups of resp 400, 100, 100, 100, 100
draw5 <- function(a,b,c,d,e) {
choose(400,a)*choose(100,b)*choose(100,c)*choose(100,d)*choose(100,e)/choose(800,5)
}
# looping over all possible distributions of 5 exposures among 5 groups
# summing the probability when its p-value is smaller than or equal to that of the observed value 0 2 1 1 1
sumx <- 0
for (f in c(0:5)) {
for(g in c(0:(5-f))) {
for(h in c(0:(5-f-g))) {
for(i in c(0:(5-f-g-h))) {
j = 5-f-g-h-i
if (draw5(f, g, h, i, j) <= draw5(0, 2, 1, 1, 1)) {
sumx <- sumx + draw5(f, g, h, i, j)
}
}
}
}
}
sumx #output is 0.03318617
# the split up case (5 groups, 400 100 100 100 100) can be calculated manually
# as a sum of probabilities for cases 0 5 and 1 4 0 0 0 (0 5 includes all cases 0 a b c d with the sum of the latter four equal to 5)
fisher.test(matrix( c(400, 98, 99 , 99, 99, 0, 2, 1, 1, 1) , ncol = 2))[1]
draw2(0,5) + 4*draw5(1,4,0,0,0)
# the original case of 2 groups (400 400) can be calculated manually
# as a sum of probabilities for the cases 0 5 and 5 0
fisher.test(matrix( c(400, 395, 0, 5) , ncol = 2))[1]
draw2(0,5) + draw2(5,0)
output of that last bit
> fisher.test(matrix( c(400, 98, 99 , 99, 99, 0, 2, 1, 1, 1) , ncol = 2))[1]
$p.value
[1] 0.03318617
> draw2(0,5) + 4*draw5(1,4,0,0,0)
[1] 0.03318617
> fisher.test(matrix( c(400, 395, 0, 5) , ncol = 2))[1]
$p.value
[1] 0.06171924
> draw2(0,5) + draw2(5,0)
[1] 0.06171924
How it affects power when splitting groups
There are some differences due to the discrete steps in the 'available' levels of the p-values and the conservativeness of Fisher's exact test (and these differences may become quite big).
Also, the Fisher test fits the (unknown) model based on the data and then uses this model to calculate p-values. The model in the example is that there are exactly 5 exposed individuals. If you model the data with a binomial for the different groups then you will occasionally get more or fewer than 5 exposed individuals. When you apply the Fisher test to this, some of the error will be fitted and the residuals will be smaller in comparison to tests with fixed marginals. The result is that the test is much too conservative, not exact.
I had expected that the effect on the experiment type I error probability would not be so great if you randomly split the groups. If the null hypothesis is true then you will encounter in roughly $\alpha$ percent of the cases a significant p-value. For this example the differences are big as the image shows. The main reason is that, with total 5 exposures, there are only three levels of absolute difference (5-0, 4-1, 3-2, 2-3, 1-4, 0-5) and only three discrete p-values (in the case of two groups of 400).
Most interesting is the plot of probabilities to reject $H_0$ if $H_0$ is true and if $H_a$ is true. In this case the alpha level and discreteness does not matter so much (we plot the effective rejection rate), and we still see a big difference.
The question remains whether this holds for all possible situations.
3 times code adjustment of your power analysis (and 3 images):
using binomial restricting to case of 5 exposed individuals
Plots of the effective probability of rejecting $H_0$ as a function of the selected alpha. It is known for Fisher's exact test that the p-value is calculated exactly, but only a few levels (the steps) occur, so the test is often too conservative in relation to a chosen alpha level.
It is interesting to see that the effect is much stronger for the 400-400 case (red) versus the 400-100-100-100-100 case (blue). Thus we may indeed use this split-up to increase the power, i.e. make it more likely to reject $H_0$ (although we care not so much about making the type I error more likely, so the point of doing this split-up to increase power may not always be so strong).
using binomial not restricting to 5 exposed individuals
If we use a binomial like you did then neither of the two cases 400-400 (red) or 400-100-100-100-100 (blue) gives an accurate p-value. This is because the Fisher exact test assumes fixed row and column totals, but the binomial model allows these to be free. The Fisher test will 'fit' the row and column totals making the residual term smaller than the true error term.
does the increased power come at a cost?
If we compare the probabilities of rejecting when the $H_0$ is true and when the $H_a$ is true (we wish the first value low and the second value high) then we see that indeed the power (rejecting when $H_a$ is true) can be increased without the cost that the type I error increases.
# using binomial distribution for 400, 100, 100, 100, 100
# x uses separate cases
# y uses the sum of the 100 groups
p <- replicate(4000, { n <- rbinom(4, 100, 0.006125); m <- rbinom(1, 400, 0.006125);
x <- matrix( c(400 - m, 100 - n, m, n), ncol = 2);
y <- matrix( c(400 - m, 400 - sum(n), m, sum(n)), ncol = 2);
c(sum(n,m),fisher.test(x)$p.value,fisher.test(y)$p.value)} )
# calculate hypothesis test using only tables with sum of 5 for the 1st row
ps <- c(1:1000)/1000
m1 <- sapply(ps,FUN = function(x) mean(p[2,p[1,]==5] < x))
m2 <- sapply(ps,FUN = function(x) mean(p[3,p[1,]==5] < x))
plot(ps,ps,type="l",
xlab = "chosen alpha level",
ylab = "p rejection")
lines(ps,m1,col=4)
lines(ps,m2,col=2)
title("due to conservative test p-value will be smaller\n leading to differences")
# using all samples also when the sum exposed individuals is not 5
ps <- c(1:1000)/1000
m1 <- sapply(ps,FUN = function(x) mean(p[2,] < x))
m2 <- sapply(ps,FUN = function(x) mean(p[3,] < x))
plot(ps,ps,type="l",
xlab = "chosen alpha level",
ylab = "p rejection")
lines(ps,m1,col=4)
lines(ps,m2,col=2)
title("overly conservative, low effective p-values \n fitting marginals makes residuals smaller than real error")
#
# Third graph comparing H_0 and H_a
#
# using binomial distribution for 400, 100, 100, 100, 100
# x uses separate cases
# y uses the sum of the 100 groups
offset <- 0.5
p <- replicate(10000, { n <- rbinom(4, 100, offset*0.0125); m <- rbinom(1, 400, (1-offset)*0.0125);
x <- matrix( c(400 - m, 100 - n, m, n), ncol = 2);
y <- matrix( c(400 - m, 400 - sum(n), m, sum(n)), ncol = 2);
c(sum(n,m),fisher.test(x)$p.value,fisher.test(y)$p.value)} )
# calculate hypothesis test using only tables with sum of 5 for the 1st row
ps <- c(1:10000)/10000
m1 <- sapply(ps,FUN = function(x) mean(p[2,p[1,]==5] < x))
m2 <- sapply(ps,FUN = function(x) mean(p[3,p[1,]==5] < x))
offset <- 0.6
p <- replicate(10000, { n <- rbinom(4, 100, offset*0.0125); m <- rbinom(1, 400, (1-offset)*0.0125);
x <- matrix( c(400 - m, 100 - n, m, n), ncol = 2);
y <- matrix( c(400 - m, 400 - sum(n), m, sum(n)), ncol = 2);
c(sum(n,m),fisher.test(x)$p.value,fisher.test(y)$p.value)} )
# calculate hypothesis test using only tables with sum of 5 for the 1st row
ps <- c(1:10000)/10000
m1a <- sapply(ps,FUN = function(x) mean(p[2,p[1,]==5] < x))
m2a <- sapply(ps,FUN = function(x) mean(p[3,p[1,]==5] < x))
plot(ps,ps,type="l",
xlab = "p rejecting if H_0 true",
ylab = "p rejecting if H_a true",log="xy")
points(m1,m1a,col=4)
points(m2,m2a,col=2)
legend(0.01,0.001,c("400-400","400-100-100-100-100"),pch=c(1,1),col=c(2,4))
title("comparing H_0:p=0.5 \n with H_a:p=0.6")
Why does it affect the power
I believe that the key to the problem is in the difference of the outcome values that are chosen to be "significant". The situation is five exposed individuals being drawn from 5 groups of size 400, 100, 100, 100 and 100. Different selections can be made that are considered 'extreme', and apparently the power increases (even when the effective type I error is the same) when we go for the second strategy.
If we were to sketch the difference between the first and second strategy graphically, then I imagine a coordinate system with 5 axes (for the groups of 400, 100, 100, 100 and 100), with a point for the hypothesis values and a surface that depicts a distance of deviation beyond which the probability is below a certain level. With the first strategy this surface is a cylinder, with the second strategy this surface is a sphere. The same is true for the true values and a surface around them for the error. What we want is for the overlap to be as small as possible.
We can make an actual graphic when we consider a slightly different problem (with lower dimensionality).
Imagine we wish to test a Bernoulli process $H_0: p=0.5$ by doing 1000 experiments. Then we can apply the same strategy by splitting the 1000 trials into two groups of size 500. What does this look like (let X and Y be the counts in the two groups)?
The plot shows how the groups of 500 and 500 (instead of a single group of 1000) are distributed.
The standard hypothesis test would assess (for a 95% alpha level) whether the sum of X and Y is larger than 531 or smaller than 469.
But this includes very unlikely unequal distribution of X and Y.
Imagine a shift of the distribution from $H_0$ to $H_a$. Then the regions in the edges do not matter so much, and a more circular boundary would make more sense.
This is, however, not (necessarily) true when we do not select the splitting of the groups randomly and when there may be a meaning to the groups. | Surprising behavior of the power of Fisher exact test (permutation tests) | Why the p-values different
There are two effects going on:
Because of the discreteness of the values you choose the 'most likely to happen' 0 2 1 1 1 vector. But this would differ from the (impossibl | Surprising behavior of the power of Fisher exact test (permutation tests)
Why the p-values different
There are two effects going on:
Because of the discreteness of the values you choose the 'most likely to happen' 0 2 1 1 1 vector. But this would differ from the (impossible) 0 1.25 1.25 1.25 1.25, which would have a smaller $\chi^2$ value.
The result is that the vector 5 0 0 0 0 is not being counted anymore
as an 'at least as extreme' case (5 0 0 0 0 has smaller $\chi^2$ than 0 2 1 1 1). This was the case before. The two
sided Fisher test on the 2x2 table counts both cases of the 5 exposures being in the first or the second group as equally extreme.
This is why the p-value differs by almost a factor 2. (not exactly because of the next point)
While you lose the 5 0 0 0 0 as an equally extreme case, you gain the 1 4 0 0 0 as a more extreme case than 0 2 1 1 1.
So the difference is in the boundary of the $\chi^2$ value (or a directly calculated p-value as used by the R implementation of the exact Fisher test). If you split up the group of 400 into 4 groups of 100 then different cases will be considered as more or less 'extreme' than the other. 5 0 0 0 0 is now less 'extreme' than 0 2 1 1 1. But 1 4 0 0 0 is more 'extreme'.
code example:
# probability of distribution a and b exposures among 2 groups of 400
draw2 <- function(a,b) {
choose(400,a)*choose(400,b)/choose(800,5)
}
# probability of distribution a, b, c, d and e exposures among 5 groups of resp 400, 100, 100, 100, 100
draw5 <- function(a,b,c,d,e) {
choose(400,a)*choose(100,b)*choose(100,c)*choose(100,d)*choose(100,e)/choose(800,5)
}
# looping over all possible distributions of 5 exposures among 5 groups
# summing the probability when its p-value is smaller than or equal to that of the observed value 0 2 1 1 1
sumx <- 0
for (f in c(0:5)) {
for(g in c(0:(5-f))) {
for(h in c(0:(5-f-g))) {
for(i in c(0:(5-f-g-h))) {
j = 5-f-g-h-i
if (draw5(f, g, h, i, j) <= draw5(0, 2, 1, 1, 1)) {
sumx <- sumx + draw5(f, g, h, i, j)
}
}
}
}
}
sumx #output is 0.03318617
# the split up case (5 groups, 400 100 100 100 100) can be calculated manually
# as a sum of probabilities for cases 0 5 and 1 4 0 0 0 (0 5 includes all cases 0 a b c d with the sum of the latter four equal to 5)
fisher.test(matrix( c(400, 98, 99 , 99, 99, 0, 2, 1, 1, 1) , ncol = 2))[1]
draw2(0,5) + 4*draw5(1,4,0,0,0)
# the original case of 2 groups (400 400) can be calculated manually
# as a sum of probabilities for the cases 0 5 and 5 0
fisher.test(matrix( c(400, 395, 0, 5) , ncol = 2))[1]
draw2(0,5) + draw2(5,0)
output of that last bit
> fisher.test(matrix( c(400, 98, 99 , 99, 99, 0, 2, 1, 1, 1) , ncol = 2))[1]
$p.value
[1] 0.03318617
> draw2(0,5) + 4*draw5(1,4,0,0,0)
[1] 0.03318617
> fisher.test(matrix( c(400, 395, 0, 5) , ncol = 2))[1]
$p.value
[1] 0.06171924
> draw2(0,5) + draw2(5,0)
[1] 0.06171924
How it affects power when splitting groups
There are some differences due to the discrete steps in the 'available' levels of the p-values and the conservativeness of Fisher's exact test (and these differences may become quite big).
Also, the Fisher test fits the (unknown) model based on the data and then uses this model to calculate p-values. The model in the example is that there are exactly 5 exposed individuals. If you model the data with a binomial for the different groups then you will occasionally get more or fewer than 5 exposed individuals. When you apply the Fisher test to this, some of the error will be fitted and the residuals will be smaller in comparison to tests with fixed marginals. The result is that the test is much too conservative, not exact.
I had expected that the effect on the experiment type I error probability would not be so great if you randomly split the groups. If the null hypothesis is true then you will encounter in roughly $\alpha$ percent of the cases a significant p-value. For this example the differences are big as the image shows. The main reason is that, with total 5 exposures, there are only three levels of absolute difference (5-0, 4-1, 3-2, 2-3, 1-4, 0-5) and only three discrete p-values (in the case of two groups of 400).
Most interesting is the plot of probabilities to reject $H_0$ if $H_0$ is true and if $H_a$ is true. In this case the alpha level and discreteness does not matter so much (we plot the effective rejection rate), and we still see a big difference.
The question remains whether this holds for all possible situations.
3 times code adjustment of your power analysis (and 3 images):
using binomial restricting to case of 5 exposed individuals
Plots of the effective probability of rejecting $H_0$ as a function of the selected alpha. It is known for Fisher's exact test that the p-value is calculated exactly, but only a few levels (the steps) occur, so the test is often too conservative in relation to a chosen alpha level.
It is interesting to see that the effect is much stronger for the 400-400 case (red) versus the 400-100-100-100-100 case (blue). Thus we may indeed use this split-up to increase the power, i.e. make it more likely to reject $H_0$ (although we care not so much about making the type I error more likely, so the point of doing this split-up to increase power may not always be so strong).
using binomial not restricting to 5 exposed individuals
If we use a binomial like you did then neither of the two cases 400-400 (red) or 400-100-100-100-100 (blue) gives an accurate p-value. This is because the Fisher exact test assumes fixed row and column totals, but the binomial model allows these to be free. The Fisher test will 'fit' the row and column totals making the residual term smaller than the true error term.
does the increased power come at a cost?
If we compare the probabilities of rejecting when the $H_0$ is true and when the $H_a$ is true (we wish the first value low and the second value high) then we see that indeed the power (rejecting when $H_a$ is true) can be increased without the cost that the type I error increases.
# using binomial distribution for 400, 100, 100, 100, 100
# x uses separate cases
# y uses the sum of the 100 groups
p <- replicate(4000, { n <- rbinom(4, 100, 0.006125); m <- rbinom(1, 400, 0.006125);
x <- matrix( c(400 - m, 100 - n, m, n), ncol = 2);
y <- matrix( c(400 - m, 400 - sum(n), m, sum(n)), ncol = 2);
c(sum(n,m),fisher.test(x)$p.value,fisher.test(y)$p.value)} )
# calculate hypothesis test using only tables with sum of 5 for the 1st row
ps <- c(1:1000)/1000
m1 <- sapply(ps,FUN = function(x) mean(p[2,p[1,]==5] < x))
m2 <- sapply(ps,FUN = function(x) mean(p[3,p[1,]==5] < x))
plot(ps,ps,type="l",
xlab = "chosen alpha level",
ylab = "p rejection")
lines(ps,m1,col=4)
lines(ps,m2,col=2)
title("due to conservative test p-value will be smaller\n leading to differences")
# using all samples also when the sum exposed individuals is not 5
ps <- c(1:1000)/1000
m1 <- sapply(ps,FUN = function(x) mean(p[2,] < x))
m2 <- sapply(ps,FUN = function(x) mean(p[3,] < x))
plot(ps,ps,type="l",
xlab = "chosen alpha level",
ylab = "p rejection")
lines(ps,m1,col=4)
lines(ps,m2,col=2)
title("overly conservative, low effective p-values \n fitting marginals makes residuals smaller than real error")
#
# Third graph comparing H_0 and H_a
#
# using binomial distribution for 400, 100, 100, 100, 100
# x uses separate cases
# y uses the sum of the 100 groups
offset <- 0.5
p <- replicate(10000, { n <- rbinom(4, 100, offset*0.0125); m <- rbinom(1, 400, (1-offset)*0.0125);
x <- matrix( c(400 - m, 100 - n, m, n), ncol = 2);
y <- matrix( c(400 - m, 400 - sum(n), m, sum(n)), ncol = 2);
c(sum(n,m),fisher.test(x)$p.value,fisher.test(y)$p.value)} )
# calculate hypothesis test using only tables with sum of 5 for the 1st row
ps <- c(1:10000)/10000
m1 <- sapply(ps,FUN = function(x) mean(p[2,p[1,]==5] < x))
m2 <- sapply(ps,FUN = function(x) mean(p[3,p[1,]==5] < x))
offset <- 0.6
p <- replicate(10000, { n <- rbinom(4, 100, offset*0.0125); m <- rbinom(1, 400, (1-offset)*0.0125);
x <- matrix( c(400 - m, 100 - n, m, n), ncol = 2);
y <- matrix( c(400 - m, 400 - sum(n), m, sum(n)), ncol = 2);
c(sum(n,m),fisher.test(x)$p.value,fisher.test(y)$p.value)} )
# calculate hypothesis test using only tables with sum of 5 for the 1st row
ps <- c(1:10000)/10000
m1a <- sapply(ps,FUN = function(x) mean(p[2,p[1,]==5] < x))
m2a <- sapply(ps,FUN = function(x) mean(p[3,p[1,]==5] < x))
plot(ps,ps,type="l",
xlab = "p rejecting if H_0 true",
ylab = "p rejecting if H_a true",log="xy")
points(m1,m1a,col=4)
points(m2,m2a,col=2)
legend(0.01,0.001,c("400-400","400-100-100-100-100"),pch=c(1,1),col=c(2,4))
title("comparing H_0:p=0.5 \n with H_a:p=0.6")
Why does it affect the power
I believe that the key to the problem is in the difference of the outcome values that are chosen to be "significant". The situation is five exposed individuals being drawn from 5 groups of size 400, 100, 100, 100 and 100. Different selections can be made that are considered 'extreme', and apparently the power increases (even when the effective type I error is the same) when we go for the second strategy.
If we were to sketch the difference between the first and second strategy graphically, then I imagine a coordinate system with 5 axes (for the groups of 400, 100, 100, 100 and 100), with a point for the hypothesis values and a surface that depicts a distance of deviation beyond which the probability is below a certain level. With the first strategy this surface is a cylinder, with the second strategy this surface is a sphere. The same is true for the true values and a surface around them for the error. What we want is for the overlap to be as small as possible.
We can make an actual graphic when we consider a slightly different problem (with lower dimensionality).
Imagine we wish to test a Bernoulli process $H_0: p=0.5$ by doing 1000 experiments. Then we can apply the same strategy by splitting the 1000 trials into two groups of size 500. What does this look like (let X and Y be the counts in the two groups)?
The plot shows how the groups of 500 and 500 (instead of a single group of 1000) are distributed.
The standard hypothesis test would assess (for a 95% alpha level) whether the sum of X and Y is larger than 531 or smaller than 469.
But this includes very unlikely unequal distribution of X and Y.
Imagine a shift of the distribution from $H_0$ to $H_a$. Then the regions in the edges do not matter so much, and a more circular boundary would make more sense.
This is, however, not (necessarily) true when we do not select the splitting of the groups randomly and when there may be a meaning to the groups. | Surprising behavior of the power of Fisher exact test (permutation tests)
Why the p-values different
There are two effects going on:
Because of the discreteness of the values you choose the 'most likely to happen' 0 2 1 1 1 vector. But this would differ from the (impossibl |
32,629 | Why does MAP converge to MLE? | There are two issues here, first, why does the MAP converge to the MLE generally (but not always) and the "vanishing likelihood" problem.
For the first issue, we refer ourselves to the Bernstein - von Mises theorem. The essence of it is that, as the sample size grows, the relative information contained in the prior and in the data shifts in favor of the data, so the posterior becomes more concentrated around the data-only estimate of the MLE, and the peak actually converges to the MLE (with the usual caveat that certain assumptions have to be met.) See the Wikipedia page for a brief overview.
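As a toy illustration of that concentration (my own example with a Beta prior on a Bernoulli parameter, not something from the original answer):
import numpy as np

rng = np.random.default_rng(0)
p_true, a, b = 0.3, 5.0, 2.0                    # true parameter; deliberately off-centre Beta(5, 2) prior
for n in (10, 100, 1000, 10000):
    k = rng.binomial(1, p_true, size=n).sum()
    mle = k / n
    map_est = (a + k - 1) / (a + b + n - 2)     # mode of the Beta(a + k, b + n - k) posterior
    print(n, round(mle, 4), round(map_est, 4))
# the MAP estimate moves towards the MLE as n grows, and both approach p_true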
For the second issue, this comes about because you have not normalized the posterior density. By Bayes' Rule:
$$P(h|D) = {P(D|h)p(h) \over p(D)}$$
and, although $P(D|h) \to 0$ as $n \to \infty$, as you observe, so does $P(D)$. For a little more concreteness, if we assume two hypotheses $h_1$ and $h_2$, we find the posterior by:
$$P(h_1|D) = {P(D|h_1)p(h_1) \over P(D|h_1)p(h_1) + P(D|h_2)p(h_2)}$$
Both the numerator and the denominator have terms raised to the power $N$, so both $\to 0$ as $N \to \infty$, but it should be clear that the normalization required fixes the problem that this would otherwise cause. | Why does MAP converge to MLE? | There are two issues here, first, why does the MAP converge to the MLE generally (but not always) and the "vanishing likelihood" problem.
For the first issue, we refer ourselves to the Bernstein - von | Why does MAP converge to MLE?
There are two issues here, first, why does the MAP converge to the MLE generally (but not always) and the "vanishing likelihood" problem.
For the first issue, we refer ourselves to the Bernstein - von Mises theorem. The essence of it is that, as the sample size grows, the relative information contained in the prior and in the data shifts in favor of the data, so the posterior becomes more concentrated around the data-only estimate of the MLE, and the peak actually converges to the MLE (with the usual caveat that certain assumptions have to be met.) See the Wikipedia page for a brief overview.
For the second issue, this comes about because you have not normalized the posterior density. By Bayes' Rule:
$$P(h|D) = {P(D|h)p(h) \over p(D)}$$
and, although $P(D|h) \to 0$ as $n \to \infty$, as you observe, so does $P(D)$. For a little more concreteness, if we assume two hypotheses $h_1$ and $h_2$, we find the posterior by:
$$P(h_1|D) = {P(D|h_1)p(h_1) \over P(D|h_1)p(h_1) + P(D|h_2)p(h_2)}$$
Both the numerator and the denominator have terms raised to the power $N$, so both $\to 0$ as $N \to \infty$, but it should be clear that the normalization required fixes the problem that this would otherwise cause. | Why does MAP converge to MLE?
There are two issues here, first, why does the MAP converge to the MLE generally (but not always) and the "vanishing likelihood" problem.
For the first issue, we refer ourselves to the Bernstein - von |
32,630 | XGBoost vs Gradient Boosting Machines | @jbowman has the right answer: XGBoost is a particular implementation of GBM.
GBM is an algorithm and you can find the details in Greedy Function Approximation: A Gradient Boosting Machine.
XGBoost is an implementation of GBM; you can configure which base learner is used. It can be a tree, a stump, or another model, even a linear model.
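For instance, in the xgboost Python package the choice of base learner is just a parameter (a minimal sketch of my own; the data is random noise purely to make it runnable):
import numpy as np
import xgboost as xgb

X = np.random.rand(200, 5)
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + np.random.randn(200) * 0.1

tree_model = xgb.XGBRegressor(booster="gbtree", n_estimators=100, max_depth=3)
linear_model = xgb.XGBRegressor(booster="gblinear", n_estimators=100)   # linear base learner
tree_model.fit(X, y)
linear_model.fit(X, y)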
Here is an example of using a linear model as the base learner in XGBoost.
How does linear base learner works in boosting? And how does it works in the xgboost library? | XGBoost vs Gradient Boosting Machines | @jbowman has the right answer: XGBoost is a particular implementation of GBM.
GBM is an algorithm and you can find the details in Greedy Function Approximation: A Gradient Boosting Machine.
XGBoost is | XGBoost vs Gradient Boosting Machines
@jbowman has the right answer: XGBoost is a particular implementation of GBM.
GBM is an algorithm and you can find the details in Greedy Function Approximation: A Gradient Boosting Machine.
XGBoost is an implementation of GBM; you can configure which base learner is used. It can be a tree, a stump, or another model, even a linear model.
Here is an example of using a linear model as the base learner in XGBoost.
How does linear base learner works in boosting? And how does it works in the xgboost library? | XGBoost vs Gradient Boosting Machines
@jbowman has the right answer: XGBoost is a particular implementation of GBM.
GBM is an algorithm and you can find the details in Greedy Function Approximation: A Gradient Boosting Machine.
XGBoost is |
32,631 | XGBoost vs Gradient Boosting Machines | the gradient boosting (GBM) algorithm computes the residuals (negative gradient) and then fit them by using a regression tree with mean square error (MSE) as the splitting criterion. How is that different from the XGBoost algorithm?
Both indeed fit a regression tree to minimize MSE w.r.t. a pseudo-response variable in every boosting iteration. This pseudo response is computed based on the original binary response, and predictions from the regression trees of previous iterations.
The two methods differ in how the pseudo response is computed. GBM uses a first-order derivative of the loss function at the current boosting iteration, while XGBoost uses both the first- and second-order derivatives. The latter is also known as Newton boosting.
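To make that concrete for the usual log-loss (my own sketch, not code from either library): with predicted probability p and label y, the first derivative with respect to the raw score is g = p - y and the second is h = p(1 - p); gradient boosting fits the next tree to -g by least squares, while Newton boosting also uses h, for example in leaf values of the form -sum(g)/(sum(h) + lambda):
import numpy as np

def pseudo_responses(y, raw_score, reg_lambda=1.0):
    p = 1.0 / (1.0 + np.exp(-raw_score))        # current predicted probability
    g = p - y                                   # first-order derivative of log-loss
    h = p * (1.0 - p)                           # second-order derivative
    gradient_target = -g                        # what first-order gradient boosting fits
    newton_leaf_value = -g.sum() / (h.sum() + reg_lambda)  # Newton-style weight for one leaf
    return gradient_target, newton_leaf_value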
As far as I know, the R package gbm uses gradient boosting by default. At each boosting iteration, the regression tree minimizes a least-squares approximation to the negative gradient.
Is the only difference between GBM and XGBoost the regularization terms or does XGBoost use another split criterion to determine the regions of the regression tree?
I do not fully understand what you mean by the regularization term. The split criterion for the regression tree indeed differs, because XGBoost computes the pseudo-response variable differently. At each boosting iteration, it fits a regression tree to minimize a weighted least squares approximation. The pseudo response is calculated so that the more extreme the predictions from previous trees (i.e., linear predictor far from zero; predicted probability close to either 0 or 1), the less influence the observation will receive in the current iteration. Thus, the gradient is weighted by the uncertainty of predictions from previous trees.
Sigrist (2021) provides a good discussion of the differences, both in terms of predictive performance and the functions being optimized.
Sigrist, F. (2021). Gradient and Newton boosting for classification and regression. Expert Systems With Applications, 167, 114080. https://arxiv.org/abs/1808.03064 | XGBoost vs Gradient Boosting Machines | the gradient boosting (GBM) algorithm computes the residuals (negative gradient) and then fit them by using a regression tree with mean square error (MSE) as the splitting criterion. How is that diffe | XGBoost vs Gradient Boosting Machines
the gradient boosting (GBM) algorithm computes the residuals (negative gradient) and then fit them by using a regression tree with mean square error (MSE) as the splitting criterion. How is that different from the XGBoost algorithm?
Both indeed fit a regression tree to minimize MSE w.r.t. a pseudo-response variable in every boosting iteration. This pseudo response is computed based on the original binary response, and predictions from the regression trees of previous iterations.
The two methods differ in how the pseudo response is computed. GBM uses a first-order derivative of the loss function at the current boosting iteration, while XGBoost uses both the first- and second-order derivatives. The latter is also known as Newton boosting.
As far as I know, the R package gbm uses gradient boosting by default. At each boosting iteration, the regression tree minimizes a least-squares approximation to the negative gradient.
Is the only difference between GBM and XGBoost the regularization terms or does XGBoost use another split criterion to determine the regions of the regression tree?
I do not fully understand what you mean by the regularization term. The split criterion for the regression tree indeed differs, because XGBoost computes the pseudo-response variable differently. At each boosting iteration, it fits a regression tree to minimize a weighted least squares approximation. The pseudo response is calculated so that the more extreme the predictions from previous trees (i.e., linear predictor far from zero; predicted probability close to either 0 or 1), the less influence the observation will receive in the current iteration. Thus, the gradient is weighted by the uncertainty of predictions from previous trees.
Sigrist (2021) provides a good discussion of the differences, both in terms of predictive performance and the functions being optimized.
Sigrist, F. (2021). Gradient and Newton boosting for classification and regression. Expert Systems With Applications, 167, 114080. https://arxiv.org/abs/1808.03064 | XGBoost vs Gradient Boosting Machines
the gradient boosting (GBM) algorithm computes the residuals (negative gradient) and then fit them by using a regression tree with mean square error (MSE) as the splitting criterion. How is that diffe |
32,632 | XGBoost vs Gradient Boosting Machines | XGBoost and GBM are essentially the same; both work on the same principle.
In XGBoost parallel computation is possible, meaning that in XGBoost many GBMs work in parallel.
In XGBoost there are more tuning parameters.
Either of them can be used; I chose to go with XGBoost because of its few extra tuning parameters, which gave slightly more accuracy. | XGBoost vs Gradient Boosting Machines | XGBoost and GBM are essentially the same; both work on the same principle.
In XGBoost parallel computation is possible, meaning that in XGBoost many GBMs work in parallel.
In Xgboost tunning parameters a | XGBoost vs Gradient Boosting Machines
XGBoost and GBM are essentially the same; both work on the same principle.
In XGBoost parallel computation is possible, meaning that in XGBoost many GBMs work in parallel.
In XGBoost there are more tuning parameters.
Either of them can be used; I chose to go with XGBoost because of its few extra tuning parameters, which gave slightly more accuracy. | XGBoost vs Gradient Boosting Machines
XGBoost and GBM are essentially the same; both work on the same principle.
In XGBoost parallel computation is possible, meaning that in XGBoost many GBMs work in parallel.
In Xgboost tunning parameters a |
32,633 | How to normalize highly volatile time series? | TLDR
Re-read the linked to paper, it's excellent! I think so anyways, and it seems to have some solid options once you've framed the problem you want solved...
Now I'm not going to leave ya with just that, so let's dissect some of what the paper is going on about while keeping your questions in mind.
The paper discusses when they chose to normalize and re-normalize data on page 2 1.2.3 Arbitrary Query Lengths cannot be Indexed and near the end of that page is pretty clear about there not being a technique for similarity searches of arbitrary lengths when dealing with such large datasets.
... though if I understand what they are searching for in this instance, this should only be a problem if you're feeding in lots of sequences that may or may not be correlated in the same or neighboring time intervals... short aside, DTW (Dynamic Time Warp) pretty groovy stuff there, I'll restate that re-reading the paper is probably a good idea... Which for medical related fields totally makes sense, eg. the EEG example from the paper would be sampling n number of points from each of the subject's brain per t time-period multiplied by the length of the time-slice. In other-words a metric-s-ton of data!
Which begs the question of how big you expect your n dimension to be with a crypto-coin?
Regardless, in part 4.2.1 things get interesting as far as data pre-processing, and they even provide a link at the end [43] to some source code to play with.
As far as a best way for pre-processing, that seems to still be a subject of active research, as well as dependent upon the model and what is being trained for and how. Though it seems like you're on the right track as far as wanting to normalize or standardize through each time-range.
One suggestion on a related question How to represent an unbounded variable as number between 0 and 1 (which at the heart seems to be what your question is about, Bitcoin being an unbounded variable along many measurable dimensions), provides some pseudo-code Using a trainable minmax. A version of which I've partially translated to (partially functioning) Python using the following min-maxing method from How to normalize data between -1 and 1?, for demonstration of some of the issues you'll face just with pre-processing.
$$
x''' = (b-a)\frac{x - \min{x}}{\max{x} - \min{x}} + a
$$
#!/usr/bin/env python
from __future__ import division
class Scaler_MinMax(list):
"""
Scaler_MinMax is a `list` that scales inputted, or `append`ed, or `extend`ed values to be between chosen `scale_`s
"""
def __init__(self, l, scale_min = -1, scale_max = 1, auto_recalibrate = False):
self.scale_min = scale_min
self.scale_max = scale_max
self.range_min = min(l)
self.range_max = max(l)
self.initial_values = l
self.auto_recalibrate = auto_recalibrate
super(Scaler_MinMax, self).__init__(self.scale(numbers = l))
def scale(self, numbers = [], scale_min = None, scale_max = None, range_min = None, range_max = None):
"""
Returns list of scaled values
"""
if scale_min is None:
scale_min = self.scale_min
if scale_max is None:
scale_max = self.scale_max
if range_min is None:
range_min = self.range_min
if range_max is None:
range_max = self.range_max
return [((x - range_min) / (range_max - range_min)) * (scale_max - scale_min) + scale_min for x in numbers]
def unscale(self, numbers = [], scale_min = None, scale_max = None, range_min = None, range_max = None):
"""
Returns list of unscaled values
"""
if not numbers:
numbers = self
if scale_min is None:
scale_min = self.scale_min
if scale_max is None:
scale_max = self.scale_max
if range_min is None:
range_min = self.range_min
if range_max is None:
range_max = self.range_max
return [(((y - scale_min) / (scale_max - scale_min)) * (range_max - range_min)) + range_min for y in numbers]
def re_calibrate(self):
"""
Re-sets `self.range_min`, `self.range_max`, and `self`
"""
self.range_min = min(self.initial_values)
self.range_max = max(self.initial_values)
super(Scaler_MinMax, self).__init__(self.scale(
numbers = self.initial_values,
scale_min = self.scale_min,
scale_max = self.scale_max,
range_min = self.range_min,
range_max = self.range_max))
"""
Supered class overrides
"""
def __add__(self, other):
if not isinstance(other, list):
raise TypeError("can only concatenate list (not '{0}') to list".format(type(other)))
return super(Scaler_MinMax, self).__add__(self.scale(numbers = other))
def append(self, value):
"""
Appends to `self.initial_values` and `self.append(self.scale([value])[0])`
"""
self.initial_values.append(value)
if isinstance(value, list):
super(Scaler_MinMax, self).append([self.scale(numbers = [i])[0] for i in value])
else:
super(Scaler_MinMax, self).append(self.scale(numbers = [value])[0])
if self.auto_recalibrate is True:
if self.range_max != max(self.initial_values) or self.range_min != min(self.initial_values):
self.re_calibrate()
def clear(self):
"""
Clears `self.initial_values` and `self` of __ALL__ values
"""
self.initial_values.clear()
super(Scaler_MinMax, self).clear()
def extend(self, iterable):
"""
Extends `self.initial_values` and `self.extend(self.scale(iterable))`
"""
self.initial_values.extend(iterable)
super(Scaler_MinMax, self).extend(self.scale(iterable))
if self.auto_recalibrate is True:
if self.range_max != max(self.initial_values) or self.range_min != min(self.initial_values):
self.re_calibrate()
def insert(self, index, value):
"""
Inserts `value` into `self.initial_values` and `self.scale([value])` into `self`
"""
self.initial_values.insert(index, value)
super(Scaler_MinMax, self).insert(index, self.scale([value]))
if self.auto_recalibrate is True:
if self.range_max != max(self.initial_values) or self.range_min != min(self.initial_values):
self.re_calibrate()
def pop(self, index):
"""
Returns tuple of `pop`ed `index`s values from `self.initial_values`, and `self`
"""
output = self.initial_values.pop(index), super(Scaler_MinMax, self).pop(index)
if self.auto_recalibrate is True:
if self.range_max != max(self.initial_values) or self.range_min != min(self.initial_values):
self.re_calibrate()
return output
def remove(self, value):
"""
Removes value from `self.initial_values` and `self`
> use of `index` calls to ensure paired values are `pop`ed/`remove`d does increase the expense of this method
"""
if value in self.initial_values:
i = self.initial_values.index(value)
self.pop(i)
self.initial_values.remove(value)
elif value in self:
i = self.index(value)
self.initial_values.pop(i)
super(Scaler_MinMax, self).remove(value)
else:
raise ValueError("{0}.remove(x) not in list".format(self.__class__.__name__))
if self.auto_recalibrate is True:
if self.range_max != max(self.initial_values) or self.range_min != min(self.initial_values):
self.re_calibrate()
def reverse(self):
"""
Reverses `self.initial_values` and `self` __IN PLACE__
"""
self.initial_values.reverse()
super(Scaler_MinMax, self).reverse()
def sort(self):
"""
Sorts `self.initial_values` and `self` __IN PLACE__
"""
self.initial_values.sort()
super(Scaler_MinMax, self).sort()
Example usage and limitations
s = Scaler_MinMax([x for x in range(9)])
s
# -> [-1.0, -0.75, -0.5, -0.25, 0.0, 0.25, 0.5, 0.75, 1.0]
s.unscale()
# -> [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
s.initial_values
# -> [0, 1, 2, 3, 4, 5, 6, 7, 8]
s.append(20)
s
# -> [-1.0, -0.75, -0.5, -0.25, 0.0, 0.25, 0.5, 0.75, 1.0, 4.0]
s.re_calibrate()
s
# -> [-1.0, -0.9, -0.8, -0.7, -0.6, -0.5, -0.4, -0.30000000000000004, -0.19999999999999996, 1.0]
s.unscale() # Floating point errors!
# -> [0.0, 0.9999999999999998, 1.9999999999999996, 3.0000000000000004, 4.0, 5.0, 6.0, 7.0, 8.0, 20.0]
s.initial_values
# -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 20]
As shown above, floating point errors are why initial_values are saved and used for re_calibrate(); if this is not done and unscale() is used instead for re_calibrate(), floating point errors would compound! Furthermore, not only are there many places where the code can be improved, but having an extra copy of the un-parsed data alongside the parsed data can more than double memory usage.
Floating point errors adding up are also discussed in the linked to paper as well as their methods of mitigation on page 4.2.1 Early Abandoning Z-Normalization near the end where they suggest a "flush out" of such errors every million sub-sequences.
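(As an aside, a standard numerically stable way to keep such running means and variances is Welford's online update; this is not the paper's exact method, just a common building block:)
def welford_update(count, mean, m2, new_value):
    # one step of Welford's online algorithm; variance is m2 / (count - 1) once count > 1
    count += 1
    delta = new_value - mean
    mean += delta / count
    m2 += delta * (new_value - mean)
    return count, mean, m2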
Side note, so that everyone's on the same page...
$$
\mu = \frac{1}{m} \sum {x_i}
$$
... in python looks sorta like...
#!/usr/bin/env python
from __future__ import division
# Thanks be to https://stackoverflow.com/a/31136897/2632107
try:
# Python 2
range = xrange
except NameError:
# Python 3
pass
def calc_mu(m_denominator=2, x_list=[1.5, 42], start=0, end=2, step=1):
''' μ = 1/m * ∑x_i '''
chosen = []
for i in range(start, end, step):
chosen.append(x_list[i])
return float(1) / m_denominator * sum(chosen)
... well that is if I understood An example of subscript and summation notation; I've had to brush up on that and dabble with the Khan Academy's Sequences and Series in order to get a fuller grasp of what the SIGKDD trillion paper was mathing about.
Hopefully having some of the math alongside code that performs similar functions demystifies how you too can start trying out methods of data pre-processing. Because it looks like there'll be lots of trial-and-error to find the best pre-processing methods no matter what it is that one is trying to force-feed to a neural network.
I know big bummer that I've not a complete answer, but if there was a publicly available best method of predicting y for time series forecasting there'd likely be trolling of the markets instead of just flash-crashes from bots ...which, hint, hint, maybe a good idea to train a model to predict ;-)
As stated at the top, I think you'll need to frame the problems you want solved very precisely, for example you could try to predict the relative change in price and not the price directly, in which case percentiles may work okay, kinda like using a similar method as was shown in the paper for genetic information encoding to time series.
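For example, two cheap re-framings of a raw price series (my own sketch, with a made-up series): model log-returns instead of prices, or map each value to its percentile rank within a trailing window:
import numpy as np

prices = np.array([100.0, 102.0, 101.5, 105.0, 110.0, 108.0, 115.0, 150.0])

log_returns = np.diff(np.log(prices))   # relative changes, roughly scale-free

window = 4
pct_rank = [0.5] + [                    # no history for the first point, so give it a neutral rank
    (prices[max(0, i - window):i] < prices[i]).mean()   # fraction of the recent window below the current price
    for i in range(1, len(prices))
]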
Notes about code for future editors
I did not add a threshold that was suggested by terrace in the Scaler_MinMax code, to keep things a bit simpler, furthermore, I did not write the code examples with the intent to be efficient or used in production but to be accessible, please keep any edits to corrections and in the spirit of being accessible for as many readers as possible.
Otherwise I think it'll be far more helpful for readers/editors to attempt to write their own code examples in another language &/or using a different method for squashing data, that and it'd be cool to see what everyone else comes up with. | How to normalize highly volatile time series? | TLDR
Re-read the linked to paper, it's excellent! I think so anyways, and it seems to have some solid options once you've framed the problem you want solved...
Now I'm not going to leave ya with just | How to normalize highly volatile time series?
TLDR
Re-read the linked to paper, it's excellent! I think so anyways, and it seems to have some solid options once you've framed the problem you want solved...
Now I'm not going to leave ya with just that, so let's dissect some of what the paper is going on about while keeping your questions in mind.
The paper discusses when they chose to normalize and re-normalize data on page 2 1.2.3 Arbitrary Query Lengths cannot be Indexed and near the end of that page is pretty clear about there not being a technique for similarity searches of arbitrary lengths when dealing with such large datasets.
... though if I understand what they are searching for in this instance, this should only be a problem if you're feeding in lots of sequences that may or may not be correlated in the same or neighboring time intervals... short aside, DTW (Dynamic Time Warp) pretty groovy stuff there, I'll restate that re-reading the paper is probably a good idea... Which for medical related fields totally makes sense, eg. the EEG example from the paper would be sampling n number of points from each of the subject's brain per t time-period multiplied by the length of the time-slice. In other-words a metric-s-ton of data!
Which begs the question of how big you expect your n dimension to be with a crypto-coin?
Regardless, in part 4.2.1 things get interesting as far as data pre-processing, and they even provide a link at the end [43] to some source code to play with.
As far as a best way for pre-processing, that seems to still be a subject of active research, as well as dependent upon the model and what is being trained for and how. Though it seems like you're on the right track as far as wanting to normalize or standardize through each time-range.
One suggestion on a related question How to represent an unbounded variable as number between 0 and 1 (which at the heart seems to be what your question is about, Bitcoin being an unbounded variable along many measurable dimensions), provides some pseudo-code Using a trainable minmax. A version of which I've partially translated to (partially functioning) Python using the following min-maxing method from How to normalize data between -1 and 1?, for demonstration of some of the issues you'll face just with pre-processing.
$$
x''' = (b-a)\frac{x - \min{x}}{\max{x} - \min{x}} + a
$$
#!/usr/bin/env python
from __future__ import division
class Scaler_MinMax(list):
"""
Scaler_MinMax is a `list` that scales inputted, or `append`ed, or `extend`ed values to be between chosen `scale_`s
"""
def __init__(self, l, scale_min = -1, scale_max = 1, auto_recalibrate = False):
self.scale_min = scale_min
self.scale_max = scale_max
self.range_min = min(l)
self.range_max = max(l)
self.initial_values = l
self.auto_recalibrate = auto_recalibrate
super(Scaler_MinMax, self).__init__(self.scale(numbers = l))
def scale(self, numbers = [], scale_min = None, scale_max = None, range_min = None, range_max = None):
"""
Returns list of scaled values
"""
if scale_min is None:
scale_min = self.scale_min
if scale_max is None:
scale_max = self.scale_max
if range_min is None:
range_min = self.range_min
if range_max is None:
range_max = self.range_max
return [((x - range_min) / (range_max - range_min)) * (scale_max - scale_min) + scale_min for x in numbers]
def unscale(self, numbers = [], scale_min = None, scale_max = None, range_min = None, range_max = None):
"""
Returns list of unscaled values
"""
if not numbers:
numbers = self
if scale_min is None:
scale_min = self.scale_min
if scale_max is None:
scale_max = self.scale_max
if range_min is None:
range_min = self.range_min
if range_max is None:
range_max = self.range_max
return [(((y - scale_min) / (scale_max - scale_min)) * (range_max - range_min)) + range_min for y in numbers]
def re_calibrate(self):
"""
Re-sets `self.range_min`, `self.range_max`, and `self`
"""
self.range_min = min(self.initial_values)
self.range_max = max(self.initial_values)
super(Scaler_MinMax, self).__init__(self.scale(
numbers = self.initial_values,
scale_min = self.scale_min,
scale_max = self.scale_max,
range_min = self.range_min,
range_max = self.range_max))
"""
Supered class overrides
"""
def __add__(self, other):
if not isinstance(other, list):
raise TypeError("can only concatenate list (not '{0}') to list".format(type(other)))
return super(Scaler_MinMax, self).__add__(self.scale(numbers = other))
def append(self, value):
"""
Appends to `self.initial_values` and `self.append(self.scale([value])[0])`
"""
self.initial_values.append(value)
if isinstance(value, list):
super(Scaler_MinMax, self).append([self.scale(numbers = [i])[0] for i in value])
else:
super(Scaler_MinMax, self).append(self.scale(numbers = [value])[0])
if self.auto_recalibrate is True:
if self.range_max != max(self.initial_values) or self.range_min != min(self.initial_values):
self.re_calibrate()
def clear(self):
"""
Clears `self.initial_values` and `self` of __ALL__ values
"""
self.initial_values.clear()
super(Scaler_MinMax, self).clear()
def extend(self, iterable):
"""
Extends `self.initial_values` and `self.extend(self.scale(iterable))`
"""
self.initial_values.extend(iterable)
super(Scaler_MinMax, self).extend(self.scale(iterable))
if self.auto_recalibrate is True:
if self.range_max != max(self.initial_values) or self.range_min != min(self.initial_values):
self.re_calibrate()
def insert(self, index, value):
"""
Inserts `value` into `self.initial_values` and `self.scale([value])` into `self`
"""
self.initial_values.insert(index, value)
super(Scaler_MinMax, self).insert(index, self.scale([value]))
if self.auto_recalibrate is True:
if self.range_max != max(self.initial_values) or self.range_min != min(self.initial_values):
self.re_calibrate()
def pop(self, index):
"""
Returns tuple of `pop`ed `index`s values from `self.initial_values`, and `self`
"""
output = self.initial_values.pop(index), super(Scaler_MinMax, self).pop(index)
if self.auto_recalibrate is True:
if self.range_max != max(self.initial_values) or self.range_min != min(self.initial_values):
self.re_calibrate()
return output
def remove(self, value):
"""
Removes value from `self.initial_values` and `self`
> use of `index` calls to ensure paired values are `pop`ed/`remove`d does increase the expense of this method
"""
if value in self.initial_values:
i = self.initial_values.index(value)
self.pop(i)
self.initial_values.remove(value)
elif value in self:
i = self.index(value)
self.initial_values.pop(i)
super(Scaler_MinMax, self).remove(value)
else:
raise ValueError("{0}.remove(x) not in list".format(self.__class__.__name__))
if self.auto_recalibrate is True:
if self.range_max != max(self.initial_values) or self.range_min != min(self.initial_values):
self.re_calibrate()
def reverse(self):
"""
Reverses `self.initial_values` and `self` __IN PLACE__
"""
self.initial_values.reverse()
super(Scaler_MinMax, self).reverse()
def sort(self):
"""
Sorts `self.initial_values` and `self` __IN PLACE__
"""
self.initial_values.sort()
super(Scaler_MinMax, self).sort()
Example usage and limitations
s = Scaler_MinMax([x for x in range(9)])
s
# -> [-1.0, -0.75, -0.5, -0.25, 0.0, 0.25, 0.5, 0.75, 1.0]
s.unscale()
# -> [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
s.initial_values
# -> [0, 1, 2, 3, 4, 5, 6, 7, 8]
s.append(20)
s
# -> [-1.0, -0.75, -0.5, -0.25, 0.0, 0.25, 0.5, 0.75, 1.0, 4.0]
s.re_calibrate()
s
# -> [-1.0, -0.9, -0.8, -0.7, -0.6, -0.5, -0.4, -0.30000000000000004, -0.19999999999999996, 1.0]
s.unscale() # Floating point errors!
# -> [0.0, 0.9999999999999998, 1.9999999999999996, 3.0000000000000004, 4.0, 5.0, 6.0, 7.0, 8.0, 20.0]
s.initial_values
# -> [0, 1, 2, 3, 4, 5, 6, 7, 8, 20]
As shown above, floating point errors are why initial_values are saved and used for re_calibrate(); if this is not done and unscale() is used instead for re_calibrate(), floating point errors would compound! Furthermore, not only are there many places where the code can be improved, but having an extra copy of the un-parsed data alongside the parsed data can more than double memory usage.
Floating point errors adding up are also discussed in the linked to paper as well as their methods of mitigation on page 4.2.1 Early Abandoning Z-Normalization near the end where they suggest a "flush out" of such errors every million sub-sequences.
Side note, so that everyone's on the same page...
$$
\mu = \frac{1}{m} \sum {x_i}
$$
... in python looks sorta like...
#!/usr/bin/env python
from __future__ import division
# Thanks be to https://stackoverflow.com/a/31136897/2632107
try:
# Python 2
range = xrange
except NameError:
# Python 3
pass
def calc_mu(m_denominator=2, x_list=[1.5, 42], start=0, end=2, step=1):
''' μ = 1/m * ∑x_i '''
chosen = []
for i in range(start, end, step):
chosen.append(x_list[i])
return float(1) / m_denominator * sum(chosen)
... well that is if I understood An example of subscript and summation notation; I've had to brush up on that and dabble with the Khan Academy's Sequences and Series in order to get a fuller grasp of what the SIGKDD trillion paper was mathing about.
Hopefully having some of the math alongside code that performs similar functions demystifies how you too can start trying out methods of data pre-processing. Because it looks like there'll be lots of trial-and-error to find the best pre-processing methods no matter what it is that one is trying to force-feed to a neural network.
I know big bummer that I've not a complete answer, but if there was a publicly available best method of predicting y for time series forecasting there'd likely be trolling of the markets instead of just flash-crashes from bots ...which, hint, hint, maybe a good idea to train a model to predict ;-)
As stated at the top, I think you'll need to frame the problems you want solved very precisely, for example you could try to predict the relative change in price and not the price directly, in which case percentiles may work okay, kinda like using a similar method as was shown in the paper for genetic information encoding to time series.
Notes about code for future editors
I did not add a threshold that was suggested by terrace in the Scaler_MinMax code, to keep things a bit simpler, furthermore, I did not write the code examples with the intent to be efficient or used in production but to be accessible, please keep any edits to corrections and in the spirit of being accessible for as many readers as possible.
Otherwise I think it'll be far more helpful for readers/editors to attempt to write their own code examples in another language &/or using a different method for squashing data, that and it'd be cool to see what everyone else comes up with. | How to normalize highly volatile time series?
TLDR
Re-read the linked to paper, it's excellent! I think so anyways, and it seems to have some solid options once you've framed the problem you want solved...
Now I'm not going to leave ya with just |
32,634 | Per Image Normalization vs overall dataset normalization | Each method has its own purpose. In sequential data such as speech [1], the mean and covariance are calculated from an utterance (a recording) and are then used to normalize all the observations in that utterance. This is done for each utterance separately.
In images on the other hand, one image can be seen as a sequence of pixels. Therefore the mean and variance in an image are calculated from individual pixels in that image.
For pixel-wise or per-image normalization, the mean and variance are calculated for each image separately.
In case of the overall normalization, it is better though to calculate the mean and variance from the training data and use it to normalize all the sets including training, validation, test etc.
[1]https://en.wikipedia.org/wiki/Cepstral_mean_and_variance_normalization | Per Image Normalization vs overall dataset normalization | Each method has their own purposes. In sequential data such as speech [1], the mean and covariance are calculated from an utterance (a recording) and is then subtracted from all the observations in th | Per Image Normalization vs overall dataset normalization
Each method has its own purpose. In sequential data such as speech [1], the mean and covariance are calculated from an utterance (a recording) and are then used to normalize all the observations in that utterance. This is done for each utterance separately.
In images on the other hand, one image can be seen as a sequence of pixels. Therefore the mean and variance in an image are calculated from individual pixels in that image.
For pixel-wise or per-image normalization, the mean and variance are calculated for each image separately.
In case of the overall normalization, it is better though to calculate the mean and variance from the training data and use it to normalize all the sets including training, validation, test etc.
[1]https://en.wikipedia.org/wiki/Cepstral_mean_and_variance_normalization | Per Image Normalization vs overall dataset normalization
Each method has their own purposes. In sequential data such as speech [1], the mean and covariance are calculated from an utterance (a recording) and is then subtracted from all the observations in th |
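For what it's worth, a minimal NumPy sketch of the two schemes described above; the array shapes and the reuse of training-set statistics for the test set are my own illustrative choices, not part of the original answer:
import numpy as np

rng = np.random.default_rng(0)
train = rng.integers(0, 256, size=(100, 32, 32)).astype(float)   # 100 toy "images"
test = rng.integers(0, 256, size=(10, 32, 32)).astype(float)

# Per-image normalization: every image is standardized with its own mean/std.
per_image = (train - train.mean(axis=(1, 2), keepdims=True)) \
            / train.std(axis=(1, 2), keepdims=True)

# Overall normalization: statistics come from the training set only and are
# reused for validation/test, as recommended above.
mu, sigma = train.mean(), train.std()
train_norm, test_norm = (train - mu) / sigma, (test - mu) / sigma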
32,635 | Difference between random effetcs and dummy coding of a categorical variable | Fixed effects work mainly through the mean but random effects work mainly through the variance. In particular fixed effects are non-random whereas random effects are random variables with mean 0.
Means
The means of the models are different so the models are not the same. Suppose that $Y_i$ is in group 2. Then for the fixed effects model we have:
$E(Y_i) = A_i\beta + \gamma_2$
but for the mixed effect model we have the following since the mean of the $\gamma$ variables is 0 by assumption.
$E(Y_i) = A_i\beta$
so the means are different and so they cannot be the same model.
Variances
Also the covariances between two Y's in the same group are different.
If $Y_i$ and $Y_j$ are observations such that $i$ and $j$ are not equal, i.e. they are for different observations, and both are in group 2, say, then for fixed effects we must have zero covariance since the assumption is that all observations are independent.
$cov(Y_i, Y_j) = 0$
Now consider the mixed effects case. For simplicity let us assume $A$ is $0$. Then we can expand the covariance as shown below; using bilinearity of the covariance, together with the assumption that all distinct terms are independent with mean $0$, the cross terms vanish and we are left with the variance shown:
$cov(Y_i, Y_j) = cov(\gamma_2 + \epsilon_i, \gamma_2 + \epsilon_j) = var({\gamma}_2) > 0$
Thus the models are not the same. The covariance between two distinct Y's in group 2 is 0 for the fixed effect model but is positive for the mixed effects model. | Difference between random effetcs and dummy coding of a categorical variable | Fixed effects work mainly through the mean but random effects work mainly through the variance. In particular fixed effects are non-random whereas random effects are random variables with mean 0.
Mea | Difference between random effetcs and dummy coding of a categorical variable
Fixed effects work mainly through the mean but random effects work mainly through the variance. In particular fixed effects are non-random whereas random effects are random variables with mean 0.
Means
The means of the models are different so the models are not the same. Suppose that $Y_i$ is in group 2. Then for the fixed effects model we have:
$E(Y_i) = A_i\beta + \gamma_2$
but for the mixed effect model we have the following since the mean of the $\gamma$ variables is 0 by assumption.
$E(Y_i) = A_i\beta$
so the means are different and so they cannot be the same model.
Variances
Also the covariances between two Y's in the same group are different.
If $Y_i$ and $Y_j$ are observations such that $i$ and $j$ are not equal, i.e. they are for different observations, and both are in group 2, say, then for fixed effects we must have zero covariance since the assumption is that all observations are independent.
$cov(Y_i, Y_j) = 0$
Now consider the mixed effects case. For simplicity let us assume $A$ is $0$. Then we can expand the covariance as shown below and using bilinearity of the covariance and that all distinct terms are independent and have mean $0$ by assumption the cross terms vanish and we are left with the variance shown:
$cov(Y_i, Y_j) = cov(\gamma_2 + \epsilon_i, \gamma_2 + \epsilon_j) = var({\gamma}_2) > 0$
Thus the models are not the same. The covariance between two distinct Y's in group 2 is 0 for the fixed effect model but is positive for the mixed effects model. | Difference between random effetcs and dummy coding of a categorical variable
Fixed effects work mainly through the mean but random effects work mainly through the variance. In particular fixed effects are non-random whereas random effects are random variables with mean 0.
Mea |
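A quick simulation can illustrate the covariance argument above. This is only a sketch: the variance values (sigma_g = 2, sigma_e = 1) and the fixed intercept of 3 are invented for illustration, not taken from the answer:
import numpy as np

rng = np.random.default_rng(1)
n, sigma_g, sigma_e = 200_000, 2.0, 1.0

# Mixed model: Y_i and Y_j from the same group share one random gamma_2 draw.
gamma2 = rng.normal(0.0, sigma_g, n)
y_i = gamma2 + rng.normal(0.0, sigma_e, n)
y_j = gamma2 + rng.normal(0.0, sigma_e, n)
print(np.cov(y_i, y_j)[0, 1])        # close to var(gamma_2) = 4

# Fixed-effects model: gamma_2 is a constant, so the within-group covariance is ~0.
y_i_f = 3.0 + rng.normal(0.0, sigma_e, n)
y_j_f = 3.0 + rng.normal(0.0, sigma_e, n)
print(np.cov(y_i_f, y_j_f)[0, 1])    # close to 0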
32,636 | Difference between random effetcs and dummy coding of a categorical variable | You have probably long moved past this, but I figured I would clarify what the difference is between categorical variables for fixed and random effects. To keep things simple, I'll use less mathematical names for the equations so it's more straightforward. A normal regression with a categorical predictor would look like the following:
$\text{Y} = \text{Fixed Intercept} + \text{Categorical Predictor} + \text{Error}$
For our example, we can make a model predicting the influence of occupation on average income, expressed so:
$\text{Income} = \text{Grand Mean of Income} + \text{Change in Grand Mean Due to Occupation} + \text{Error}$
This equation assumes that the fixed effect of categories predicts a grand mean. So conditional upon whatever the category is, the "y" here should equal the mean outcome plus or minus whatever fixed effects are in the equation. If the reference criterion for occupation is a janitor and their average income is 40k USD, it would be used as the fixed intercept, and every other dummy coding of occupation would add the fixed effect on the grand mean based off each multiplication of the dummy code.
Random effects models, depending on how they're formulated, add an additional layer with random effect intercepts and random effect slopes. I will just use random intercepts as an example. If we are interested in occupation as a fixed effect but we aren't interested in the corporation they work for, this may be a good time to use it as a random effect (which by the way should almost always be categorical). We would simply add the random intercepts, or conditional grand means, of each corporation to the model.
$ \begin{aligned}
\text{Income} = \text{Grand Mean of Income} + \text{Occupation} + \text{Error}
\\ \text{+/- Change in Grand Mean from Corporation 1}
\\ \text{+/- Change in Grand Mean from Corporation 2}
\\ \text{+/- Change in Grand Mean from Corporation ..}
\end{aligned}
$
This essentially accomplishes the major goal in separating the equation into two separate parts. One part is a predictable fixed effect that you can estimate independent from the random effects. The other part is the actual random effect which you can look at to see how much "noise" it brings if it were to be added back into the fixed effect equation. The selection of a categorical predictor is therefore theoretical...its important to know why you think it predicts an outcome or if it is just unnecessarily confounding your estimate. | Difference between random effetcs and dummy coding of a categorical variable | You have probably long moved past this, but I figured I would clarify what the difference is between categorical variables for fixed and random effects. To keep things simple, I'll use less mathematic | Difference between random effetcs and dummy coding of a categorical variable
You have probably long moved past this, but I figured I would clarify what the difference is between categorical variables for fixed and random effects. To keep things simple, I'll use less mathematical names for the equations so it's more straightforward. A normal regression with a categorical predictor would look like the following:
$\text{Y} = \text{Fixed Intercept} + \text{Categorical Predictor} + \text{Error}$
For our example, we can make a model predicting the influence of occupation on average income, expressed so:
$\text{Income} = \text{Grand Mean of Income} + \text{Change in Grand Mean Due to Occupation} + \text{Error}$
This equation assumes that the fixed effect of categories predicts a grand mean. So conditional upon whatever the category is, the "y" here should equal the mean outcome plus or minus whatever fixed effects are in the equation. If the reference criterion for occupation is a janitor and their average income is 40k USD, it would be used as the fixed intercept, and every other dummy coding of occupation would add the fixed effect on the grand mean based off each multiplication of the dummy code.
Random effects models, depending on how they're formulated, add an additional layer with random effect intercepts and random effect slopes. I will just use random intercepts as an example. If we are interested in occupation as a fixed effect but we aren't interested in the corporation they work for, this may be a good time to use it as a random effect (which by the way should almost always be categorical). We would simply add the random intercepts, or conditional grand means, of each corporation to the model.
$ \begin{aligned}
\text{Income} = \text{Grand Mean of Income} + \text{Occupation} + \text{Error}
\\ \text{+/- Change in Grand Mean from Corporation 1}
\\ \text{+/- Change in Grand Mean from Corporation 2}
\\ \text{+/- Change in Grand Mean from Corporation ..}
\end{aligned}
$
This essentially accomplishes the major goal in separating the equation into two separate parts. One part is a predictable fixed effect that you can estimate independent from the random effects. The other part is the actual random effect which you can look at to see how much "noise" it brings if it were to be added back into the fixed effect equation. The selection of a categorical predictor is therefore theoretical...its important to know why you think it predicts an outcome or if it is just unnecessarily confounding your estimate. | Difference between random effetcs and dummy coding of a categorical variable
You have probably long moved past this, but I figured I would clarify what the difference is between categorical variables for fixed and random effects. To keep things simple, I'll use less mathematic |
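If it helps, here is one hedged sketch of how the income/occupation/corporation example above could be fit in Python with statsmodels. The simulated numbers and column names are invented; the point is only that occupation enters as a dummy-coded fixed effect while corporation supplies the random intercepts:
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 600
corp = rng.integers(0, 30, n)                                   # 30 made-up corporations
occ = rng.choice(["janitor", "engineer", "manager"], n)
occ_shift = pd.Series(occ).map({"janitor": 0, "engineer": 25, "manager": 40}).to_numpy()
corp_shift = rng.normal(0, 8, 30)[corp]                         # random intercept per corporation
income = 40 + occ_shift + corp_shift + rng.normal(0, 5, n)
df = pd.DataFrame({"income": income, "occupation": occ, "corporation": corp})

# Occupation: dummy-coded fixed effect; corporation: random intercepts (the "noise" layer).
fit = smf.mixedlm("income ~ occupation", df, groups=df["corporation"]).fit()
print(fit.summary())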
32,637 | Homoscedasticity Assumption in Linear Regression vs. Concept of Studentized Residuals | I do not really understand your confusion, but let me give this a try. Consider a linear regression
$$
y=X\beta+\varepsilon
$$
with errors $\varepsilon$ and residuals $e:=y-X\hat\beta=(I-H)y$ where $I$ is an identity matrix and $H:=X(X^\top X)^{-1}X^\top$ is the hat matrix. Suppose the linear model is correctly specified and all the assumptions, including unconditional and conditional homoskedasticity of errors, are met.
While $\varepsilon$ are homoskedastic by the assumption I just introduced, the model residuals $e$ are conditionally heteroskedastic w.r.t the level of $X$: their variance can be shown to be $\text{Var}(e)=\sigma^2_\varepsilon(I-H)$. This is an artifact of OLS estimation in a linear model.
Now suppose you do not know whether all of the assumptions are met (which is the realistic perspective) and you would like to check them. You would perhaps be tempted to use the residuals $e$ in place of the unobserved errors $\varepsilon$ to do model diagnostics, e.g. assess the assumption of conditional homoskedasticity of $\varepsilon$. Unfortunately, a conditionally homoskedastic $\varepsilon$ translates into a conditionally heteroskedastic $e$ as evidenced by the variance formula above. Thus you cannot learn much about the conditional homoskedasticity of $\varepsilon$ by inspecting the variability in $e$ vs. $X$.
But there is a remedy. You can adjust for the variance distortion in $e$ by "undoing" the scaling due to multiplication by $(I-H)$ in $e$. This results in (internally or externally) studentized residuals $\tilde{e}_{int}:=\frac{e}{\hat\sigma_{int}\sqrt{1-h_{ii}}}$ or $\tilde{e}_{ext}:=\frac{e}{\hat\sigma_{ext}\sqrt{1-h_{ii}}}$ where $\hat\sigma_{int}$ and $\hat\sigma_{ext}$ are internal and external estimates of error variance, respectively. Studentization of residuals allows putting the residuals back to the same level of conditional variance as the unobserved model errors $\varepsilon$ are, up to a scaling factor that is uniform across the data points and thus does not affect conditional homo- or heteroskedasticity.
This is why it makes sense to use studentized residuals $\tilde{e}$ in place of raw residuals $e$ when assessing conditional heteroskedasticity of the model errors $\varepsilon$ w.r.t. to the regressor $X$. | Homoscedasticity Assumption in Linear Regression vs. Concept of Studentized Residuals | I do not really understand your confusion, but let me give this a try. Consider a linear regression
$$
y=X\beta+\varepsilon
$$
with errors $\varepsilon$ and residuals $e:=y-X\hat\beta=(I-H)y$ where $ | Homoscedasticity Assumption in Linear Regression vs. Concept of Studentized Residuals
I do not really understand your confusion, but let me give this a try. Consider a linear regression
$$
y=X\beta+\varepsilon
$$
with errors $\varepsilon$ and residuals $e:=y-X\hat\beta=(I-H)y$ where $I$ is an identity matrix and $H:=X(X^\top X)^{-1}X^\top$ is the hat matrix. Suppose the linear model is correctly specified and all the assumptions, including unconditional and conditional homoskedasticity of errors, are met.
While $\varepsilon$ are homoskedastic by the assumption I just introduced, the model residuals $e$ are conditionally heteroskedastic w.r.t the level of $X$: their variance can be shown to be $\text{Var}(e)=\sigma^2_\varepsilon(I-H)$. This is an artifact of OLS estimation in a linear model.
Now suppose you do not know whether all of the assumptions are met (which is the realistic perspective) and you would like to check them. You would perhaps be tempted to use the residuals $e$ in place of the unobserved errors $\varepsilon$ to do model diagnostics, e.g. assess the assumption of conditional homoskedasticity of $\varepsilon$. Unfortunately, a conditionally homoskedastic $\varepsilon$ translates into a conditionally heteroskedastic $e$ as evidenced by the variance formula above. Thus you cannot learn much about the conditional homoskedasticity of $\varepsilon$ by inspecting the variability in $e$ vs. $X$.
But there is a remedy. You can adjust for the variance distortion in $e$ by "undoing" the scaling due to multiplication by $(I-H)$ in $e$. This results in (internally or externally) studentized residuals $\tilde{e}_{int}:=\frac{e}{\hat\sigma_{int}\sqrt{1-h_{ii}}}$ or $\tilde{e}_{ext}:=\frac{e}{\hat\sigma_{ext}\sqrt{1-h_{ii}}}$ where $\hat\sigma_{int}$ and $\hat\sigma_{ext}$ are internal and external estimates of error variance, respectively. Studentization of residuals allows putting the residuals back to the same level of conditional variance as the unobserved model errors $\varepsilon$ are, up to a scaling factor that is uniform across the data points and thus does not affect conditional homo- or heteroskedasticity.
This is why it makes sense to use studentized residuals $\tilde{e}$ in place of raw residuals $e$ when assessing conditional heteroskedasticity of the model errors $\varepsilon$ w.r.t. to the regressor $X$. | Homoscedasticity Assumption in Linear Regression vs. Concept of Studentized Residuals
I do not really understand your confusion, but let me give this a try. Consider a linear regression
$$
y=X\beta+\varepsilon
$$
with errors $\varepsilon$ and residuals $e:=y-X\hat\beta=(I-H)y$ where $ |
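As a small illustration of the formulas in this answer, a NumPy sketch (the data are simulated; $n$, $p$ and the noise scale are arbitrary) that computes the hat matrix, the leverages and the internally studentized residuals:
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(scale=1.5, size=n)  # homoskedastic errors

H = X @ np.linalg.solve(X.T @ X, X.T)          # hat matrix H = X (X'X)^{-1} X'
e = y - H @ y                                  # raw residuals, Var(e) = sigma^2 (I - H)
h = np.diag(H)                                 # leverages h_ii
sigma_int = np.sqrt(e @ e / (n - p))           # internal estimate of the error sd
e_studentized = e / (sigma_int * np.sqrt(1 - h))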
32,638 | If I want to have 95% chance that less than 1% objects are faulty, how many samples do I need? | So it depends on the distribution of your prior belief about the breakage rate, but: about 3600.
import scipy as sp
import scipy.stats, scipy.optimize  # imported explicitly so that sp.stats / sp.optimize resolve
p = 0.0075
threshold = .01
confidence = .95
f = lambda n: sp.stats.beta(a=n*p, b=n*(1-p)).cdf(threshold) - confidence
print(sp.optimize.fsolve(f, 1000)[0])
>> 3627.45119614
The idea here is to model link breakages as a Bernoulli trial, and model your beliefs about the breakage rate as the beta distribution. The beta distribution is conjugate to the Bernoulli distribution, and the way to update a beta distribution when you run a trial is pretty simple:
if it's a failure, you add one to the first parameter, $\alpha$
if it's a success, you add one to the second parameter, $\beta$
So if we start with a $\text{Beta}(0, 0)$ distribution and see failures about .75% of the time, how many trials will it take before 95% of the distribution's mass is below 0.01? About 3600. | If I want to have 95% chance that less than 1% objects are faulty, how many samples do I need? | So it depends on the distribution of your prior belief about the breakage rate, but: about 3600.
import scipy as sp
p = 0.0075
threshold = .01
confidence = .95
f = lambda n: sp.stats.beta(a=n*p, b= | If I want to have 95% chance that less than 1% objects are faulty, how many samples do I need?
So it depends on the distribution of your prior belief about the breakage rate, but: about 3600.
import scipy as sp
import scipy.stats, scipy.optimize  # imported explicitly so that sp.stats / sp.optimize resolve
p = 0.0075
threshold = .01
confidence = .95
f = lambda n: sp.stats.beta(a=n*p, b=n*(1-p)).cdf(threshold) - confidence
print(sp.optimize.fsolve(f, 1000)[0])
>> 3627.45119614
The idea here is to model link breakages as a Bernoulli trial, and model your beliefs about the breakage rate as the beta distribution. The beta distribution is conjugate to the Bernoulli distribution, and the way to update a beta distribution when you run a trial is pretty simple:
if it's a failure, you add one to the first parameter, $\alpha$
if it's a success, you add one to the second parameter, $\beta$
So if we start with a $\text{Beta}(0, 0)$ distribution and see failures about .75% of the time, how many trials will it take before 95% of the distribution's mass is below 0.01? About 3600. | If I want to have 95% chance that less than 1% objects are faulty, how many samples do I need?
So it depends on the distribution of your prior belief about the breakage rate, but: about 3600.
import scipy as sp
p = 0.0075
threshold = .01
confidence = .95
f = lambda n: sp.stats.beta(a=n*p, b= |
32,639 | If I want to have 95% chance that less than 1% objects are faulty, how many samples do I need? | For $n$ samples with $p=0.0075$ chance of failure, the variance for number of failures is $n p (1-p)$. So using central limit theorem, with $Z$ a standard normal,
\begin{align*}
\mathbb{P}(\text{failures} < .01 n) \approx \mathbb{P}(Z < \frac{n (.01 - p)}{\sqrt{n p (1-p)}}) \approx \mathbb{P}(Z < \sqrt{n} .02898)
\end{align*}
Now we want the above to equal 95%, which corresponds to $Z = 1.645$. Solving for $\sqrt{n} .02898 = 1.645$, I get $n=3222$. | If I want to have 95% chance that less than 1% objects are faulty, how many samples do I need? | For $n$ samples with $p=0.0075$ chance of failure, the variance for number of failures is $n p (1-p)$. So using central limit theorem, with $Z$ a standard normal,
\begin{align*}
\mathbb{P}(\text{failu | If I want to have 95% chance that less than 1% objects are faulty, how many samples do I need?
For $n$ samples with $p=0.0075$ chance of failure, the variance for number of failures is $n p (1-p)$. So using central limit theorem, with $Z$ a standard normal,
\begin{align*}
\mathbb{P}(\text{failures} < .01 n) \approx \mathbb{P}(Z < \frac{n (.01 - p)}{\sqrt{n p (1-p)}}) \approx \mathbb{P}(Z < \sqrt{n} .02898)
\end{align*}
Now we want the above to equal 95%, which corresponds to $Z = 1.645$. Solving for $\sqrt{n} .02898 = 1.645$, I get $n=3222$. | If I want to have 95% chance that less than 1% objects are faulty, how many samples do I need?
For $n$ samples with $p=0.0075$ chance of failure, the variance for number of failures is $n p (1-p)$. So using central limit theorem, with $Z$ a standard normal,
\begin{align*}
\mathbb{P}(\text{failu |
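A two-line check of the arithmetic above, using scipy only for the normal quantile:
from scipy.stats import norm

p, threshold = 0.0075, 0.01
z = norm.ppf(0.95)                                   # ~1.645
n = (z * (p * (1 - p)) ** 0.5 / (threshold - p)) ** 2
print(round(n))                                      # ~3222, as derived above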
32,640 | What if all the nodes are dropped when using dropout? | This is a concern which will very rarely ever be realized. For a moderately sized neural network whose hidden layers each have $1000$ units, if the dropout probability is set to $p=0.5$ (the high end of what's typically used) then the probability of all $1000$ units being zero is $0.5^{1000} = 9.3\times10^{-302}$ which is a mind-bogglingly tiny value. Even for a very small neural network with only $50$ units in the hidden layer, the probability of all units being zero is $.5^{50}=8.9\times10^{-16}$, or less than $\frac{1}{1\ \text{thousand trillion}}$
So in short, this isn't something you ever need to worry about in most real-world situations, and in the rare instances where it does happen, you could simply rerun the dropout step to obtain a new set of dropped weights.
UPDATE:
Digging through the source code for TensorFlow, I found the implementation of dropout here. TensorFlow doesn't even bother accounting for the special case where all of the units are zero. If this happens to occur, then the output from that layer will simply be zero. The units don't "disappear" when dropped, they just take on the value zero, which from the perspective of the other layers in the network is perfectly fine. They can perform their subsequent operations on a vector of zeros just as well as on a vector of non-zero values. | What if all the nodes are dropped when using dropout? | This is a concern which will very rarely every be realized. For a moderately sized neural network whose hidden layers each have $1000$ units, if the dropout probability is set to $p=0.5$ (the high end | What if all the nodes are dropped when using dropout?
This is a concern which will very rarely ever be realized. For a moderately sized neural network whose hidden layers each have $1000$ units, if the dropout probability is set to $p=0.5$ (the high end of what's typically used) then the probability of all $1000$ units being zero is $0.5^{1000} = 9.3\times10^{-302}$ which is a mind-bogglingly tiny value. Even for a very small neural network with only $50$ units in the hidden layer, the probability of all units being zero is $.5^{50}=8.9\times10^{-16}$, or less than $\frac{1}{1\ \text{thousand trillion}}$
So in short, this isn't something you ever need to worry about in most real-world situations, and in the rare instances where it does happen, you could simply rerun the dropout step to obtain a new set of dropped weights.
UPDATE:
Digging through the source code for TensorFlow, I found the implementation of dropout here. TensorFlow doesn't even bother accounting for the special case where all of the units are zero. If this happens to occur, then the output from that layer will simply be zero. The units don't "disappear" when dropped, they just take on the value zero, which from the perspective of the other layers in the network is perfectly fine. They can perform their subsequent operations on a vector of zeros just as well as on a vector of non-zero values. | What if all the nodes are dropped when using dropout?
This is a concern which will very rarely every be realized. For a moderately sized neural network whose hidden layers each have $1000$ units, if the dropout probability is set to $p=0.5$ (the high end |
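The "simply rerun the dropout step" suggestion from the answer above could look like this in NumPy. This is a hypothetical helper of my own, not TensorFlow's actual implementation (which, as noted, does not bother with this special case at all):
import numpy as np

def dropout_mask(n_units, p_drop, rng):
    """Sample an inverted-dropout mask; in the astronomically unlikely event
    that every unit is dropped, resample instead of returning all zeros."""
    while True:
        keep = rng.random(n_units) >= p_drop
        if keep.any():
            return keep / (1.0 - p_drop)   # scale kept units so the expected activation is unchanged

mask = dropout_mask(1000, 0.5, np.random.default_rng(4))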
32,641 | What if all the nodes are dropped when using dropout? | That situation should be avoided. If all the neurons in one of the hidden layers are dropped, signals would not proceed towards the output neuron, and your neural network would not function as wanted. As you could see in below picture, only a part of your neurons in a layer are dropped.
You normally set the dropout rate for each hidden layer. So, as long as you set the dropout rate below 1, that sort of situation is extremely unlikely.
Below is how dropout layer is implemented in Tensorflow and Keras. You generally set the dropout rates for all the hidden layers as the same number (0.x) as it is convenient to tune the hyperparameter.
# set the dropout rate as any number between 0 and 1
dropout_rate = 0.4
# tensorflow implementation
dropout = tf.nn.dropout(x, keep_prob = 1 - dropout_rate)  # keep_prob is the probability of keeping a unit, not of dropping it
# keras implementation
dropout = keras.layers.Dropout(dropout_rate) | What if all the nodes are dropped when using dropout? | That situation should be avoided. If all the neurons in one of the hidden layers are dropped, signals would not proceed towards the output neuron, and your neural network would not function as wanted. | What if all the nodes are dropped when using dropout?
That situation should be avoided. If all the neurons in one of the hidden layers are dropped, signals would not proceed towards the output neuron, and your neural network would not function as wanted. As you could see in below picture, only a part of your neurons in a layer are dropped.
You normally set the dropout rate for each hidden layer. So, as long as you set the dropout rate below 1, that sort of situation is extremely unlikely.
Below is how dropout layer is implemented in Tensorflow and Keras. You generally set the dropout rates for all the hidden layers as the same number (0.x) as it is convenient to tune the hyperparameter.
# set the dropout rate as any number between 0 and 1
dropout_rate = 0.4
# tensorflow implementation
dropout = tf.nn.dropout(x, keep_prob = 1 - dropout_rate)  # keep_prob is the probability of keeping a unit, not of dropping it
# keras implementation
dropout = keras.layers.Dropout(dropout_rate) | What if all the nodes are dropped when using dropout?
That situation should be avoided. If all the neurons in one of the hidden layers are dropped, signals would not proceed towards the output neuron, and your neural network would not function as wanted. |
32,642 | Why is sqrt(6) used to calculate epsilon for random initialisation of neural networks? | I believe this is Xavier normalized initialization (implemented in several deep learning frameworks eg Keras, Cafe, ...)
from Understanding the difficulty of training deep feedforward neural networks by Xavier Glorot & Yoshua Bengio.
See equations 12, 15 and 16 in the paper linked: they aim to satisfy equation 12:
$$\text{Var}[W_i] = \frac{2}{n_i + n_{i+1}}$$
and the variance of a uniform RV in $[-\epsilon,\epsilon]$ is $\epsilon^2/3$ (mean is zero, pdf = $1/(2\epsilon)$ so variance $=\int_{-\epsilon}^{\epsilon}x^2 \frac{1}{2\epsilon}dx$ | Why is sqrt(6) used to calculate epsilon for random initialisation of neural networks? | I believe this is Xavier normalized initialization (implemented in several deep learning frameworks eg Keras, Cafe, ...)
from Understanding the difficulty of training deep feedforward neural networks | Why is sqrt(6) used to calculate epsilon for random initialisation of neural networks?
I believe this is Xavier normalized initialization (implemented in several deep learning frameworks eg Keras, Cafe, ...)
from Understanding the difficulty of training deep feedforward neural networks by Xavier Glorot & Yoshua Bengio.
See equations 12, 15 and 16 in the paper linked: they aim to satisfy equation 12:
$$\text{Var}[W_i] = \frac{2}{n_i + n_{i+1}}$$
and the variance of a uniform RV in $[-\epsilon,\epsilon]$ is $\epsilon^2/3$ (mean is zero, pdf = $1/(2\epsilon)$ so variance $=\int_{-\epsilon}^{\epsilon}x^2 \frac{1}{2\epsilon}dx = \epsilon^2/3$). Setting $\epsilon^2/3 = \frac{2}{n_i + n_{i+1}}$ and solving for $\epsilon$ gives $\epsilon = \sqrt{\frac{6}{n_i + n_{i+1}}}$, which is where the $\sqrt{6}$ comes from.
I believe this is Xavier normalized initialization (implemented in several deep learning frameworks eg Keras, Cafe, ...)
from Understanding the difficulty of training deep feedforward neural networks |
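Putting the two equations together, a sketch of the resulting initializer (the fan_in/fan_out naming is mine), with a quick empirical check of the target variance:
import numpy as np

def xavier_uniform(fan_in, fan_out, rng=np.random.default_rng(5)):
    eps = np.sqrt(6.0 / (fan_in + fan_out))          # from eps**2 / 3 = 2 / (fan_in + fan_out)
    return rng.uniform(-eps, eps, size=(fan_in, fan_out))

W = xavier_uniform(300, 100)
print(W.var(), 2.0 / (300 + 100))                    # both ~0.005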
32,643 | Optimise SVM to avoid false-negative in binary classification | Scikit learn implementation of the SVM binary classifier does not let you set a cutoff threshold as the other comments/replies have suggested. Instead of giving class probabilities, it straighaway applies a default cutoff to give you the class membership e.g. 1 or 2.
To minimize false negatives, you could set higher weights for training samples labeled as the positive class; by default the weights are set to 1 for all classes. To change this, use the hyper-parameter class_weight.
Ideally, you should avoid choosing a cutoff and simply provide the class probabilities to the end users who can then decide on which cutoff to apply when making decisions based on the classifier.
A better metric to compare classifiers is a proper scoring function, see https://en.wikipedia.org/wiki/Scoring_rule and the score() method in the svm classifier module sklearn.svm.SVC. | Optimise SVM to avoid false-negative in binary classification | Scikit learn implementation of the SVM binary classifier does not let you set a cutoff threshold as the other comments/replies have suggested. Instead of giving class probabilities, it straighaway app | Optimise SVM to avoid false-negative in binary classification
The scikit-learn implementation of the SVM binary classifier does not let you set a cutoff threshold as the other comments/replies have suggested. Instead of giving class probabilities, it straightaway applies a default cutoff to give you the class membership, e.g. 1 or 2.
To minimize false negatives, you could set higher weights for training samples labeled as the positive class; by default the weights are set to 1 for all classes. To change this, use the hyper-parameter class_weight.
Ideally, you should avoid choosing a cutoff and simply provide the class probabilities to the end users who can then decide on which cutoff to apply when making decisions based on the classifier.
A better metric to compare classifiers is a proper scoring function, see https://en.wikipedia.org/wiki/Scoring_rule and the score() method in the svm classifier module sklearn.svm.SVC. | Optimise SVM to avoid false-negative in binary classification
Scikit learn implementation of the SVM binary classifier does not let you set a cutoff threshold as the other comments/replies have suggested. Instead of giving class probabilities, it straighaway app |
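In scikit-learn terms, both suggestions above (up-weighting the positive class, and exposing class probabilities so the user picks the cutoff) might look roughly like this; the class weight of 5 and the 0.2 cutoff are arbitrary illustration values:
from sklearn.datasets import make_classification
from sklearn.svm import SVC

X, y = make_classification(n_samples=500, weights=[0.9, 0.1], random_state=0)

# class_weight penalizes mistakes on the positive class more heavily
clf = SVC(class_weight={1: 5.0}, probability=True, random_state=0).fit(X, y)

# expose probabilities and let the end user pick a cutoff that suits their costs
proba = clf.predict_proba(X)[:, 1]
pred_low_fn = (proba >= 0.2).astype(int)   # a low cutoff trades false positives for fewer false negatives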
32,644 | Optimise SVM to avoid false-negative in binary classification | Like many predictive models, an SVM will output probability scores, and a threshold is then applied to those probabilities to convert them into positive or negative labels.
As @Sycorax mentioned in a comment, you can adjust the cut-off threshold to tune the trade-off between false positives and false negatives.
Here is an example in R.
library(kernlab)
library(mlbench)
graphics.off()
set.seed(0)
d=mlbench.2dnormals(500)
plot(d)
# using 2nd order polynominal expansion
svp <- ksvm(d$x,d$classes,type="C-svc",kernel="polydot",
kpar=list(degree=2),C=10,prob.model=T)
plot(svp)
p=predict(svp,d$x, type="prob")[,1]
cut_off=0.5
caret::confusionMatrix(d$classes,ifelse(p<cut_off,2,1))
cut_off=0.8
caret::confusionMatrix(d$classes,ifelse(p<cut_off,2,1))
Note that when we change cut_off, the confusion matrix (false positives, false negatives, etc.) changes.
> caret::confusionMatrix(d$classes,ifelse(p<cut_off,2,1))
Confusion Matrix and Statistics
Reference
Prediction 1 2
1 253 16
2 38 193
Accuracy : 0.892
95% CI : (0.8614, 0.9178)
No Information Rate : 0.582
P-Value [Acc > NIR] : < 2.2e-16
Kappa : 0.7813
Mcnemar's Test P-Value : 0.004267
Sensitivity : 0.8694
Specificity : 0.9234
Pos Pred Value : 0.9405
Neg Pred Value : 0.8355
Prevalence : 0.5820
Detection Rate : 0.5060
Detection Prevalence : 0.5380
Balanced Accuracy : 0.8964
'Positive' Class : 1
> cut_off=0.8
> caret::confusionMatrix(d$classes,ifelse(p<cut_off,2,1))
Confusion Matrix and Statistics
Reference
Prediction 1 2
1 223 46
2 10 221
Accuracy : 0.888
95% CI : (0.857, 0.9143)
No Information Rate : 0.534
P-Value [Acc > NIR] : < 2.2e-16
Kappa : 0.7772
Mcnemar's Test P-Value : 2.91e-06
Sensitivity : 0.9571
Specificity : 0.8277
Pos Pred Value : 0.8290
Neg Pred Value : 0.9567
Prevalence : 0.4660
Detection Rate : 0.4460
Detection Prevalence : 0.5380
Balanced Accuracy : 0.8924
'Positive' Class : 1 | Optimise SVM to avoid false-negative in binary classification | Like many predictive model, SVM will output probability scores and the apply threshold to probability to convert it into positive or negative labels.
As, @Sycorax mentioned in comment, you can adjust | Optimise SVM to avoid false-negative in binary classification
Like many predictive models, an SVM will output probability scores, and a threshold is then applied to those probabilities to convert them into positive or negative labels.
As @Sycorax mentioned in a comment, you can adjust the cut-off threshold to tune the trade-off between false positives and false negatives.
Here is an example in R.
library(kernlab)
library(mlbench)
graphics.off()
set.seed(0)
d=mlbench.2dnormals(500)
plot(d)
# using 2nd order polynominal expansion
svp <- ksvm(d$x,d$classes,type="C-svc",kernel="polydot",
kpar=list(degree=2),C=10,prob.model=T)
plot(svp)
p=predict(svp,d$x, type="prob")[,1]
cut_off=0.5
caret::confusionMatrix(d$classes,ifelse(p<cut_off,2,1))
cut_off=0.8
caret::confusionMatrix(d$classes,ifelse(p<cut_off,2,1))
Note that when we change cut_off, the confusion matrix (false positives, false negatives, etc.) changes.
> caret::confusionMatrix(d$classes,ifelse(p<cut_off,2,1))
Confusion Matrix and Statistics
Reference
Prediction 1 2
1 253 16
2 38 193
Accuracy : 0.892
95% CI : (0.8614, 0.9178)
No Information Rate : 0.582
P-Value [Acc > NIR] : < 2.2e-16
Kappa : 0.7813
Mcnemar's Test P-Value : 0.004267
Sensitivity : 0.8694
Specificity : 0.9234
Pos Pred Value : 0.9405
Neg Pred Value : 0.8355
Prevalence : 0.5820
Detection Rate : 0.5060
Detection Prevalence : 0.5380
Balanced Accuracy : 0.8964
'Positive' Class : 1
> cut_off=0.8
> caret::confusionMatrix(d$classes,ifelse(p<cut_off,2,1))
Confusion Matrix and Statistics
Reference
Prediction 1 2
1 223 46
2 10 221
Accuracy : 0.888
95% CI : (0.857, 0.9143)
No Information Rate : 0.534
P-Value [Acc > NIR] : < 2.2e-16
Kappa : 0.7772
Mcnemar's Test P-Value : 2.91e-06
Sensitivity : 0.9571
Specificity : 0.8277
Pos Pred Value : 0.8290
Neg Pred Value : 0.9567
Prevalence : 0.4660
Detection Rate : 0.4460
Detection Prevalence : 0.5380
Balanced Accuracy : 0.8924
'Positive' Class : 1 | Optimise SVM to avoid false-negative in binary classification
Like many predictive model, SVM will output probability scores and the apply threshold to probability to convert it into positive or negative labels.
As, @Sycorax mentioned in comment, you can adjust |
32,645 | Difference between offset and exposure in Poisson Regression | Let's take a quick look at Wikipedia:
For example, biologists may count the number of tree species in a forest: events would be tree observations, exposure would be unit area, and rate would be the number of species per unit area. Demographers may model death rates in geographic areas as the count of deaths divided by person−years. More generally, event rates can be calculated as events per unit time, which allows the observation window to vary for each unit. In these examples, exposure is respectively unit area, person−years and unit time. In Poisson regression this is handled as an offset,
Exposure is a measure of what you want to divide your counts by. Do you want to divide by unit area? By volume? It has nothing to do with Poisson regression. It's something you want to do with your data.
Offset is a modelling technique in Poisson regression. If you don't want to use Poisson regression, you won't have an offset in your model. It's a simple trick in Poisson regression that allows you to model rates without a new statistical framework.
We use an offset with the Poisson regression model to adjust counts of events for time periods, areas and volumes. For details on what exactly the offset is mathematically, go to:
When to use an offset in a Poisson regression?
Note how the offset goes to the right side of the equation. The offset is the log of exposure (because we're using the log link). | Difference between offset and exposure in Poisson Regression | Let's take a quick look at Wikipedia:
For example, biologists may count the number of tree species in a forest: events would be tree observations, exposure would be unit area, and rate would be the n | Difference between offset and exposure in Poisson Regression
Let's take a quick look at Wikipedia:
For example, biologists may count the number of tree species in a forest: events would be tree observations, exposure would be unit area, and rate would be the number of species per unit area. Demographers may model death rates in geographic areas as the count of deaths divided by person−years. More generally, event rates can be calculated as events per unit time, which allows the observation window to vary for each unit. In these examples, exposure is respectively unit area, person−years and unit time. In Poisson regression this is handled as an offset,
Exposure is a measure on how you want to divide your counts to. Do you want to divide by unit area? volume size? It has nothing to do with Poisson regression. It's something you want to do with your data.
Offset is a modelling technique in Poisson regression. If you don't want to use Poisson regression, you won't have an offset in your model. It's a simple trick in Poisson regression that allows you model for rates without a new statistical framework.
We use offset with the Poisson regression model to adjust for counts of events over time periods, areas and volumes. Details on what exactly offset is mathematically, goto:
When to use an offset in a Poisson regression?
Note how the offset goes to the right side of the equation. The offset is the log of exposure (because we're using the log link). | Difference between offset and exposure in Poisson Regression
Let's take a quick look at Wikipedia:
For example, biologists may count the number of tree species in a forest: events would be tree observations, exposure would be unit area, and rate would be the n |
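As a concrete (hypothetical) sketch of "offset = log(exposure)" with statsmodels; the data here are simulated and the variable names are mine, not from the answer:
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(6)
n = 500
exposure = rng.uniform(1, 10, n)              # e.g. person-years or unit area observed
x = rng.normal(size=n)
rate = np.exp(0.3 + 0.5 * x)                  # events per unit of exposure
counts = rng.poisson(rate * exposure)

X = sm.add_constant(x)
model = sm.GLM(counts, X, family=sm.families.Poisson(), offset=np.log(exposure))
print(model.fit().summary())                  # the coefficients estimate the rate model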
32,646 | Why are most of my points classified as noise using DBSCAN? | The scatter-plot of the SVD projection scores of the original TFIDF data does suggest that indeed some density structure should be detected. Nevertheless these data are not the inputs DBSCAN is presented with. It appears you are using as input the original TFIDF data.
It is very plausible that the original TFIDF dataset is sparse and high-dimensional. Detecting density-based clusters in such a domain would be very demanding. High-dimensional density estimation is a properly hard problem; it is a typical scenario where the curse of dimensionality kicks in. We are just seeing a manifestation of this problem ("curse"); the resulting clustering returned by DBSCAN is rather sparse itself and assumes (probably wrongly) that the data at hand are riddled with outliers.
I would suggest that, at first instance at least, DBSCAN is provided with the projection scores used to create the scatter-plot shown as inputs. This approach would be effectively Latent Semantic Analysis (LSA). In LSA we use the SVD decomposition of a matrix containing word counts of the text corpus analysed (or a normalised term-document matrix of as the one returned by TFIDF) to investigate the relations between the text-units of the corpus at hand. | Why are most of my points classified as noise using DBSCAN? | The scatter-plot of the SVD projection scores of the original TFIDF data does suggest that indeed some density structure should be detected. Nevertheless these data are not the inputs DBSCAN is prese | Why are most of my points classified as noise using DBSCAN?
The scatter-plot of the SVD projection scores of the original TFIDF data does suggest that indeed some density structure should be detected. Nevertheless these data are not the inputs DBSCAN is presented with. It appears you are using as input the original TFIDF data.
It is very plausible that the original TFIDF dataset is sparse and high-dimensional. Detecting density-based clusters in such a domain would be very demanding. High-dimensional density estimation is a properly hard problem; it is a typical scenario where the curse of dimensionality kicks in. We are just seeing a manifestation of this problem ("curse"); the resulting clustering returned by DBSCAN is rather sparse itself and assumes (probably wrongly) that the data at hand are riddled with outliers.
I would suggest that, at first instance at least, DBSCAN is provided with the projection scores used to create the scatter-plot shown as inputs. This approach would be effectively Latent Semantic Analysis (LSA). In LSA we use the SVD decomposition of a matrix containing word counts of the text corpus analysed (or a normalised term-document matrix of as the one returned by TFIDF) to investigate the relations between the text-units of the corpus at hand. | Why are most of my points classified as noise using DBSCAN?
The scatter-plot of the SVD projection scores of the original TFIDF data does suggest that indeed some density structure should be detected. Nevertheless these data are not the inputs DBSCAN is prese |
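A minimal sketch of the suggested LSA-style pipeline with scikit-learn; the placeholder corpus, number of components, eps and min_samples are assumptions you would have to tune for your own data:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import TruncatedSVD
from sklearn.preprocessing import Normalizer
from sklearn.cluster import DBSCAN

docs = ["some example document", "another example text", "totally different topic"]  # placeholder corpus
tfidf = TfidfVectorizer().fit_transform(docs)                # sparse, high-dimensional
scores = TruncatedSVD(n_components=2).fit_transform(tfidf)   # LSA projection scores
scores = Normalizer(copy=False).fit_transform(scores)        # common LSA post-processing
labels = DBSCAN(eps=0.5, min_samples=2).fit_predict(scores)  # cluster in the low-dimensional space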
32,647 | Why are most of my points classified as noise using DBSCAN? | DBSCAN min sample should be more than the number of features by 1 and then increase by the number you want this is the number of points in the circle and eps is the radius of the circle. | Why are most of my points classified as noise using DBSCAN? | DBSCAN min sample should be more than the number of features by 1 and then increase by the number you want this is the number of points in the circle and eps is the radius of the circle. | Why are most of my points classified as noise using DBSCAN?
DBSCAN min sample should be more than the number of features by 1 and then increase by the number you want this is the number of points in the circle and eps is the radius of the circle. | Why are most of my points classified as noise using DBSCAN?
DBSCAN min sample should be more than the number of features by 1 and then increase by the number you want this is the number of points in the circle and eps is the radius of the circle. |
32,648 | Incorporating prior knowledge into artificial neural networks | Actually, there are many ways to incorporate prior knowledge into neural networks. The simplest type of prior knowledge often used is weight decay. Weight decay assumes the weights come from a normal distribution with zero mean and some fixed variance. This type of prior is added as an extra term to the loss function, having the form:
$$\mathcal{L}(w) = E(w) + \lambda\frac{1}{2}||w||_2^2,$$
where $E(w)$ is the data term (e.g. a MSE loss) and $\lambda$ controls the relative importance of the two terms; it is also inversely proportional to the prior variance. This corresponds to the negative log-likelihood of the following probability:
$$p(w|\mathcal{D})\propto p(\mathcal D|w)p(w),$$
where $p(w)=\mathcal N(w|0,\lambda^{-1}I)$ and $-\log p(w)\propto -\log\,\exp(-\frac{\lambda}{2}||w||_2^2)=\frac{\lambda}{2}||w||_2^2$. This is the same as the bayesian approach to modeling prior knowledge.
However, there are also other, less straight-forward methods to incorporate prior knowledge into neural networks. They are very important: prior knowledge is what really bridges the gap between huge neural networks and (relatively) small datasets. Some examples are:
Data augmentation: By training the network on data perturbed by various class-preserving transformations, you are incorporating your prior knowledge about the domain, namely the transformations that the network should be invariant to.
Network architecture: One of the most successful neural network techniques of the past decades are the convolutional networks. Their architecture sharing limited field-of-view kernels over spatial locations brilliantly exploits our knowledge about data in image space. This is also a form of prior knowledge incorporated into the model.
Regularization loss terms: Similar to weight decay, it is possible to construct other loss terms which penalize mappings contradicting our domain knowledge.
For an in-depth analysis/overview of these methods, I can point you to my article Regularization for Deep Learning: A Taxonomy. Also, I recommend looking into bayesian neural networks, meta-learning (finding meaningful prior information from other tasks in the same domain, see e.g. (Baxter, 2000)), possibly also one-shot learning (e.g. (Lake et al., 2015)). | Incorporating prior knowledge into artificial neural networks | Actually, there are many ways to incorporate prior knowledge into neural networks. The simplest type of prior knowledge often used is weight decay. Weight decay assumes the weights come from a normal | Incorporating prior knowledge into artificial neural networks
Actually, there are many ways to incorporate prior knowledge into neural networks. The simplest type of prior knowledge often used is weight decay. Weight decay assumes the weights come from a normal distribution with zero mean and some fixed variance. This type of prior is added as an extra term to the loss function, having the form:
$$\mathcal{L}(w) = E(w) + \lambda\frac{1}{2}||w||_2^2,$$
where $E(w)$ is the data term (e.g. a MSE loss) and $\lambda$ controls the relative importance of the two terms; it is also inversely proportional to the prior variance. This corresponds to the negative log-likelihood of the following probability:
$$p(w|\mathcal{D})\propto p(\mathcal D|w)p(w),$$
where $p(w)=\mathcal N(w|0,\lambda^{-1}I)$ and $-\log p(w)\propto -\log\,\exp(-\frac{\lambda}{2}||w||_2^2)=\frac{\lambda}{2}||w||_2^2$. This is the same as the bayesian approach to modeling prior knowledge.
However, there are also other, less straight-forward methods to incorporate prior knowledge into neural networks. They are very important: prior knowledge is what really bridges the gap between huge neural networks and (relatively) small datasets. Some examples are:
Data augmentation: By training the network on data perturbed by various class-preserving transformations, you are incorporating your prior knowledge about the domain, namely the transformations that the network should be invariant to.
Network architecture: One of the most successful neural network techniques of the past decades are the convolutional networks. Their architecture sharing limited field-of-view kernels over spatial locations brilliantly exploits our knowledge about data in image space. This is also a form of prior knowledge incorporated into the model.
Regularization loss terms: Similar to weight decay, it is possible to construct other loss terms which penalize mappings contradicting our domain knowledge.
For an in-depth analysis/overview of these methods, I can point you to my article Regularization for Deep Learning: A Taxonomy. Also, I recommend looking into bayesian neural networks, meta-learning (finding meaningful prior information from other tasks in the same domain, see e.g. (Baxter, 2000)), possibly also one-shot learning (e.g. (Lake et al., 2015)). | Incorporating prior knowledge into artificial neural networks
Actually, there are many ways to incorporate prior knowledge into neural networks. The simplest type of prior knowledge often used is weight decay. Weight decay assumes the weights come from a normal |
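The weight-decay term above is easy to see in code. A framework-free sketch of gradient descent for ridge-style linear regression, where the data and the value of lambda are invented for illustration:
import numpy as np

rng = np.random.default_rng(7)
X, y = rng.normal(size=(100, 5)), rng.normal(size=100)
w, lam, lr = np.zeros(5), 0.1, 0.01

for _ in range(1000):
    err = X @ w - y
    loss = 0.5 * err @ err + lam * 0.5 * w @ w      # E(w) + (lambda/2) ||w||^2
    grad = X.T @ err + lam * w                      # the prior adds lambda * w to the gradient
    w -= lr * grad
print(loss, w)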
32,649 | Comparing distributions of unequal sample sizes | The answer to your bolded question is no. (And you don't need to upsample or downsample anything.)
If A and B are both random samples from their respective populations the sample cdf will converge to the population cdf; that doesn't creep up the axis with sample size.
If A and B are both random samples from their respective populations, taking a larger sample from B wouldn't tend to move the distribution along the line (except as random variation allows there could be a little movement in either direction); as you sample more you just get a more precise estimate of the shape of B's distribution. The whole thing would be higher, not just the upper part.
You can adjust for A's larger count by scaling your histograms to have area 1. That will make the picture of the shapes more constant as sample sizes might change (in vanilla R, hist with freq=FALSE does that).
If the distribution of B didn't have finite mean, then it's possible that larger samples could tend to look more extreme than smaller ones, if you compare something like the sample mean (rather than the overall distribution), but then your t-test wouldn't be valid either. This would require a very heavy upper tail though.
Scaling the heights to make them (roughly) close to comparable on the left side (below about 80), we have:
-- and now we can see that although they're not so much different (in percentage terms) on the left side, A's right tail (above 80-odd) is still much higher.
It's possible to do this rescaling "by eye", without having to physically do it. Given the large samples, this means that it's immediately obvious by looking at your first plot that A tends to be bigger than B. | Comparing distributions of unequal sample sizes | The answer to your bolded question is no. (And you don't need to upsample or downsample anything.)
If A and B are both random samples from their respective populations the sample cdf will converge to | Comparing distributions of unequal sample sizes
The answer to your bolded question is no. (And you don't need to upsample or downsample anything.)
If A and B are both random samples from their respective populations the sample cdf will converge to the population cdf; that doesn't creep up the axis with sample size.
If A and B are both random samples from their respective populations, taking a larger sample from B wouldn't tend to move the distribution along the line (except as random variation allows there could be a little movement in either direction); as you sample more you just get a more precise estimate of the shape of B's distribution. The whole thing would be higher, not just the upper part.
You can adjust for A's larger count by scaling your histograms to have area 1. That will make the picture of the shapes more constant as sample sizes might change (in vanilla R, hist with freq=FALSE does that).
If the distribution of B didn't have finite mean, then its possible that larger samples could possibly tend to look more extreme than smaller ones, if you compare something like the sample mean (rather than the overall distribution), but then your t-test wouldn't be valid either. This would require a very heavy upper tail though.
Scaling the heights to make them (roughly) close to comparable on the left side (below about 80), we have:
-- and now we can see that although they're not so much different (in percentage terms) on the left side, A's right tail (above 80-odd) is still much higher.
It's possible to do this rescaling "by eye", without having to physically do it. Given the large samples, this means that it's immediately obvious by looking at your first plot that A tends to be bigger than B. | Comparing distributions of unequal sample sizes
The answer to your bolded question is no. (And you don't need to upsample or downsample anything.)
If A and B are both random samples from their respective populations the sample cdf will converge to |
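The Python analogue of R's freq=FALSE is density=True in matplotlib; a hedged sketch with simulated A and B of very different sizes (the means, spreads and sample sizes are invented):
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(8)
A = rng.normal(75, 15, 200_000)      # large sample
B = rng.normal(70, 12, 5_000)        # much smaller sample

# Scaling each histogram to area 1 makes the shapes directly comparable
plt.hist(A, bins=60, density=True, alpha=0.5, label="A")
plt.hist(B, bins=60, density=True, alpha=0.5, label="B")
plt.legend()
plt.show()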
32,650 | test if a markov chain is equal to a theoretical one | Assuming your matrices are something like
$$P_{ij}=\Pr[j\mid\!i] \,,\, Q_{ij}=\sum_{t=1}^N\big[x_t=i\,\&\,x_{t+1}=j\,\big]$$
then you could interpret each row $i$ as a multinomial distribution with parameters
$$p_i=P_{i,:} \,,\, n_i=\sum_{j=1}^{K}Q_{ij}$$
I am not sure that you can lump all of the rows together, because the "number of trials" will vary between rows.
For example say $K=3$ and your data is $x=[1,1,2,1,2,3,1,2]$. So there are $N=7$ transitions, with $n_1=4$ coming from $x=1$, but $n_2=2$ from $x=2$ and only $n_3=1$ from $x=3$. So I would think your confidence in $\hat{p}_1$ should generally be higher than your confidence in $\hat{p}_3$.
(In the extreme case, maybe for this example $K$ was actually $4$, but you have no data at all on those transitions, as $n_4=0$. Treating "absence of evidence as evidence of absence" would seem problematic to me here.)
I am not very familiar with chi-squared tests, but this suggests you might want to treat the rows independently (i.e. sum only over $j$, and use $n_i$ rather than $N$). This reasoning does not seem specific to the chi-squared test, so should also apply to any other significance test you might use (e.g. exact multinomial).
The key issue is that the transition probabilities are conditional, so for each matrix-entry only the transitions which satisfy its pre-condition are relevant. Indeed, presumably the transition matrix will satisfy $\sum_jP_{ij}=1$, hence the "empirical transition matrix" should be $\hat{P}_{ij}=Q_{ij}/n_i$.
Update: In response to query by OP, a clarification on the "test parameters".
If there are $K$ states in the Markov chain, i.e. $P\in\mathbb{R}^{K\times{K}}$, then for row $i$, the corresponding multinomial distribution will have probability vector $p_i\in\mathbb{R}^K$ and number of trials $n_i\in\mathbb{N}$, given above.
So there will be $K$ categories, and the probability vector $p_i$ will have $K-1$ degrees of freedom, as $\sum_{j=1}^K(p_i)_j=1$. So for row $i$ the corresponding $\chi^2$ statistic would be
$$\chi^2_i=\sum_j\frac{\left(Q_{ij}-n_iP_{ij}\right)^2}{n_iP_{ij}}$$
which will asymptotically follow a chi-squared distribution with $K-1$ degrees of freedom (as stated here and here). See also here for a discussion of when the $\chi^2$ test is appropriate, and alternative tests which may be more appropriate.
It may be possible to do a "lumped test", assuming $\chi^2_P=\sum_i\chi^2_i$ follows a chi-squared distribution with $K(K-1)$ dof's (i.e. summing dofs over rows). However I am not certain if the $\chi^2_i$ can be treated as independent. In any case, the row-wise tests would seem to be more informative, so may be preferable to a lumped test. | test if a markov chain is equal to a theoretical one | Assuming your matrices are something like
$$P_{ij}=\Pr[j\mid\!i] \,,\, Q_{ij}=\sum_{t=1}^N\big[x_t=i\,\&\,x_{t+1}=j\,\big]$$
then you could interpret each row $i$ as a multinomial distribution with pa | test if a markov chain is equal to a theoretical one
Assuming your matrices are something like
$$P_{ij}=\Pr[j\mid\!i] \,,\, Q_{ij}=\sum_{t=1}^N\big[x_t=i\,\&\,x_{t+1}=j\,\big]$$
then you could interpret each row $i$ as a multinomial distribution with parameters
$$p_i=P_{i,:} \,,\, n_i=\sum_{j=1}^{K}Q_{ij}$$
I am not sure that you can lump all of the rows together, because the "number of trials" will vary between rows.
For example say $K=3$ and your data is $x=[1,1,2,1,2,3,1,2]$. So there are $N=7$ transitions, with $n_1=4$ coming from $x=1$, but $n_2=2$ from $x=2$ and only $n_3=1$ from $x=3$. So I would think your confidence in $\hat{p}_1$ should generally be higher than your confidence in $\hat{p}_3$.
(In the extreme case, maybe for this example $K$ was actually $4$, but you have no data at all on those transitions, as $n_4=0$. Treating "absence of evidence as evidence of absence" would seem problematic to me here.)
I am not very familiar with chi-squared tests, but this suggests you might want to treat the rows independently (i.e. sum only over $j$, and use $n_i$ rather than $N$). This reasoning does not seem specific to the chi-squared test, so should also apply to any other significance test you might use (e.g. exact multinomial).
The key issue is that the transition probabilities are conditional, so for each matrix-entry only the transitions which satisfy its pre-condition are relevant. Indeed, presumably the transition matrix will satisfy $\sum_jP_{ij}=1$, hence the "empirical transition matrix" should be $\hat{P}_{ij}=Q_{ij}/n_i$.
Update: In response to query by OP, a clarification on the "test parameters".
If there are $K$ states in the Markov chain, i.e. $P\in\mathbb{R}^{K\times{K}}$, then for row $i$, the corresponding multinomial distribution will have probability vector $p_i\in\mathbb{R}^K$ and number of trials $n_i\in\mathbb{N}$, given above.
So there will be $K$ categories, and the probability vector $p_i$ will have $K-1$ degrees of freedom, as $\sum_{j=1}^K(p_i)_j=1$. So for row $i$ the corresponding $\chi^2$ statistic would be
$$\chi^2_i=\sum_j\frac{\left(Q_{ij}-n_iP_{ij}\right)^2}{n_iP_{ij}}$$
which will asymptotically follow a chi-squared distribution with $K-1$ degrees of freedom (as stated here and here). See also here for a discussion of when the $\chi^2$ test is appropriate, and for alternative tests which may be more suitable.
It may be possible to do a "lumped test", assuming $\chi^2_P=\sum_i\chi^2_i$ follows a chi-squared distribution with $K(K-1)$ dof's (i.e. summing dofs over rows). However I am not certain if the $\chi^2_i$ can be treated as independent. In any case, the row-wise tests would seem to be more informative, so may be preferable to a lumped test. | test if a markov chain is equal to a theoretical one
Assuming your matrices are something like
$$P_{ij}=\Pr[j\mid\!i] \,,\, Q_{ij}=\sum_{t=1}^N\big[x_t=i\,\&\,x_{t+1}=j\,\big]$$
then you could interpret each row $i$ as a multinomial distribution with pa |
32,651 | deep learning - word embedding with parts of speech | 1. Concatenating word2vec and POS features
Adding POS information to your classifier is fine. You will of course want to create a train/dev/test split, eg 5-way cross-validation, to test to what extent adding this information improves your results (it is data-dependent: only you can test this, using your own data).
To combine the POS and word2vec features, you can simply concatenate them. I assume when you say 'CNN', you mean '1-dimensional CNN', is that right? So your input data if you were just using word2vec features, would be something like:
[batch size][sequence length][word2vec dimensions (ie 300)]
ie, a batch size * sequence_length * word2vec_dim sized tensor. So, concatenating with the POS features, your input data tensor will become:
[batch size][sequence length][
word2vec dimensions (ie 300) + POS dimensions (ie 20)]
ie a batch size * sequence_length * 320 sized tensor.
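As a toy illustration, here is a minimal R sketch of that concatenation; the batch size, sequence length, and the 20-dimensional one-hot POS encoding are assumptions for the example, and abind is just one convenient way to stack arrays along the feature axis.
# Minimal sketch (R): concatenate word2vec and one-hot POS features along the last axis.
library(abind)
batch <- 32; steps <- 50
w2v <- array(rnorm(batch * steps * 300), dim = c(batch, steps, 300))  # word2vec features
pos <- array(0, dim = c(batch, steps, 20))                            # one-hot POS features
pos[, , 1] <- 1                    # pretend every token got the first POS tag
x <- abind(w2v, pos, along = 3)    # combined input tensor
dim(x)                             # 32 50 320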
2. sense2vec
You might also want to check out sense2vec, from Trask et al, 2016, https://arxiv.org/pdf/1511.06388.pdf , which makes use of POS information to disambiguate word2vec embeddings:
"This paper presents a novel approach which addresses these concerns by
modeling multiple embeddings for each word based on supervised disambiguation,
which provides a fast and accurate way for a consuming NLP model to select
a sense-disambiguated embedding. We demonstrate that these embeddings can
disambiguate both contrastive senses such as nominal and verbal senses as well
as nuanced senses such as sarcasm. We further evaluate Part-of-Speech disambiguated
embeddings on neural dependency parsing, yielding a greater than 8%
average error reduction in unlabeled attachment scores across 6 languages." | deep learning - word embedding with parts of speech | 1. Concatenating word2vec and POS features
Adding POS information to your classifier is fine. You will of course want to create a train/dev/test split, eg 5-way cross-validation, to test to what exten | deep learning - word embedding with parts of speech
1. Concatenating word2vec and POS features
Adding POS information to your classifier is fine. You will of course want to create a train/dev/test split, eg 5-way cross-validation, to test to what extent adding this information improves your results (it's data dependent, really depends on your data, only you can test this, using your own data).
To combine the POS and word2vec features, you can simply concatenate them. I assume when you say 'CNN', you mean '1-dimensional CNN', is that right? So your input data if you were just using word2vec features, would be something like:
[batch size][sequence length][word2vec dimensions (ie 300)]
ie, a batch size * sequence_length * word2vec_dim sized tensor. So, concatenating with the POS features, your input data tensor will become:
[batch size][sequence length][
word2vec dimensions (ie 300) + POS dimensions (ie 20)]
ie a batch size * sequence_length * 320 sized tensor.
2. sense2vec
You might also want to check out sense2vec, from Trask et al, 2016, https://arxiv.org/pdf/1511.06388.pdf , which makes use of POS information to disambiguate word2vec embeddings:
"This paper presents a novel approach which addresses these concerns by
modeling multiple embeddings for each word based on supervised disambiguation,
which provides a fast and accurate way for a consuming NLP model to select
a sense-disambiguated embedding. We demonstrate that these embeddings can
disambiguate both contrastive senses such as nominal and verbal senses as well
as nuanced senses such as sarcasm. We further evaluate Part-of-Speech disambiguated
embeddings on neural dependency parsing, yielding a greater than 8%
average error reduction in unlabeled attachment scores across 6 languages." | deep learning - word embedding with parts of speech
1. Concatenating word2vec and POS features
Adding POS information to your classifier is fine. You will of course want to create a train/dev/test split, eg 5-way cross-validation, to test to what exten |
32,652 | deep learning - word embedding with parts of speech | I understand that this is an old question, but since there's no answer yet I'll add one.
The intuition here is a well-motivated one: feed the grammatical structure to your model by using a good existing parser, so that your model doesn't have to learn to understand the grammatical structure from scratch.
Something along these lines was (somewhat recently) implemented by the SEST/SEDT model from CMU. Instead of tagging the parts of speech they actually feed in a full parse tree. The results appear to be favorable. | deep learning - word embedding with parts of speech | I understand that this is an old question, but since there's no answer yet I'll add one.
The intuition here is a well-motivated one: feed the grammatical structure to your model by using a good existi | deep learning - word embedding with parts of speech
I understand that this is an old question, but since there's no answer yet I'll add one.
The intuition here is a well-motivated one: feed the grammatical structure to your model by using a good existing parser, so that your model doesn't have to learn to understand the grammatical structure from scratch.
Something along these lines was (somewhat recently) implemented by the SEST/SEDT model from CMU. Instead of tagging the parts of speech they actually feed in a full parse tree. The results appear to be favorable. | deep learning - word embedding with parts of speech
I understand that this is an old question, but since there's no answer yet I'll add one.
The intuition here is a well-motivated one: feed the grammatical structure to your model by using a good existi |
32,653 | Should we always do CV? | In general, you don't have to use cross validation all the time. The point of CV is to get a more stable estimate of the generalizability of your classifier than you would get using only one test set. You don't have to use CV if your data set is enormous, so adding data to your training set won't improve your model much, and a few misclassifications in your test set, occurring just by random chance, won't really change your performance metric.
By having a small training set and a big test set, your estimate will be biased, so it will probably be worse than what you would get using more training data. Also, the optimal hyperparameters you find might be different for a bigger dataset, simply because more data will require less regularization.
However, getting optimal hyperparameters is not the important part anyway, and it won't improve the performance dramatically. You should focus your energy on understanding the problem, creating good features and getting the data into good shape.
Here are a few things you can consider to speed things up:
Train it with fewer features. Use feature selection and/or dimensionality reduction to decrease the size of your problem
Use precached kernel for SVM
Use algorithms that don't need to select hyperparameters over a grid, especially linear ones like logistic regression with a ridge/lasso/elastic net penalty, or even a linear SVM. Depending on the implementation, those classifiers can fit models for all hyperparameters along a regularization path for roughly the cost of fitting only one (see the sketch at the end of this answer)
use faster implementation for your type of problem (you will have to google it)
and even with a slower computer, you can:
Use more cores
Use GPU | Should we always do CV? | In general, you don't have to use cross validation all the time. Point of CV is to get more stable estimate of generalizability of your classifier that you would get using only one test set. You don't | Should we always do CV?
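To make that regularization-path point concrete, here is a minimal R sketch; glmnet is my choice for the example, not something the answer prescribes, and the data are made up. A single call fits the whole lasso path, and cv.glmnet cross-validates over it.
# Minimal sketch (R): one call fits models for a whole path of penalties.
library(glmnet)
set.seed(1)
x <- matrix(rnorm(200 * 20), 200, 20)                    # made-up predictors
y <- rbinom(200, 1, plogis(x[, 1] - x[, 2]))             # made-up binary response
fit <- glmnet(x, y, family = "binomial", alpha = 1)      # full lasso path
cv  <- cv.glmnet(x, y, family = "binomial", alpha = 1)   # CV over the same path
cv$lambda.min                                            # penalty minimising CV deviance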
In general, you don't have to use cross validation all the time. The point of CV is to get a more stable estimate of the generalizability of your classifier than you would get using only one test set. You don't have to use CV if your data set is enormous, so adding data to your training set won't improve your model much, and a few misclassifications in your test set, occurring just by random chance, won't really change your performance metric.
By having a small training set and a big test set, your estimate will be biased, so it will probably be worse than what you would get using more training data. Also, the optimal hyperparameters you find might be different for a bigger dataset, simply because more data will require less regularization.
However, getting optimal hyperparameters is not the important part anyway, and it won't improve the performance dramatically. You should focus your energy on understanding the problem, creating good features and getting the data into good shape.
Here are a few things you can consider to speed things up:
Train it with fewer features. Use feature selection and/or dimensionality reduction to decrease the size of your problem
Use precached kernel for SVM
Use algorithms that don't need to select hyperparameters over a grid, especially linear ones like logistic regression with a ridge/lasso/elastic net penalty, or even a linear SVM. Depending on the implementation, those classifiers can fit models for all hyperparameters along a regularization path for roughly the cost of fitting only one
use faster implementation for your type of problem (you will have to google it)
and even with a slower computer, you can:
Use more cores
Use GPU | Should we always do CV?
In general, you don't have to use cross validation all the time. Point of CV is to get more stable estimate of generalizability of your classifier that you would get using only one test set. You don't |
32,654 | Should we always do CV? | Cross-validation is a tool to estimate the variance of your performance metric due to randomness in the data (and maybe in the learning algorithm if it is not deterministic).
So if you use only one split, e.g. 80% train + 20% test, and report your performance metric from this single experiment, there is a good chance that anyone trying to reproduce your experiment using exactly the same parameters will find a different performance figure (sometimes very different). Unless, of course, you provide the exact same split, which is meaningless.
To come back to your question, I think you should definitely use CV to report your performance (e.g. do a 10-fold CV and report the mean and standard deviation of the performance metric). Now for tuning your algorithm you may use a much smaller validation set sampled from the training set (make sure it is not included in the test set).
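For concreteness, a minimal R sketch of reporting a CV performance estimate as a mean and standard deviation over folds; the data and the logistic-regression model are made up for illustration.
# Minimal sketch (R): 10-fold CV accuracy, reported as mean and sd over folds.
set.seed(1)
n <- 200
d <- data.frame(x = rnorm(n))
d$y <- rbinom(n, 1, plogis(d$x))
folds <- sample(rep(1:10, length.out = n))      # random fold assignment
acc <- sapply(1:10, function(k) {
  fit  <- glm(y ~ x, family = binomial, data = d[folds != k, ])
  prob <- predict(fit, newdata = d[folds == k, ], type = "response")
  mean((prob > 0.5) == (d$y[folds == k] == 1))  # fold accuracy
})
c(mean = mean(acc), sd = sd(acc))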
If you're afraid that you won't find the best hyperparameters using a small set then you're probably overfitting your algorithm to the specifics of the dataset. If you can't find a configuration using a small sample that gives a reasonable performance among all folds then the algorithm is probably not very useful in practice.
Also keep in mind some algorithms are simply too slow / don't scale well in some configurations. This is also a part of practical model selection.
Since you mention SVMs, of course most implementations will be slow when trying to find parameters for non-linear kernels by grid search. Grid search has exponential complexity, so use it with very few parameters. Also keep in mind that most libraries provide sensible default parameters (or at least you set one parameter and there are heuristics to set the others). | Should we always do CV? | Cross-validation is a tool to estimate the variance of your performance metric due to randomness in the data (and maybe in the learning algorithm if it not deterministic).
So if you use only one split | Should we always do CV?
Cross-validation is a tool to estimate the variance of your performance metric due to randomness in the data (and maybe in the learning algorithm if it is not deterministic).
So if you use only one split, e.g. 80% train + 20% test and report your performance metric from this single experiment there are good chances that anyone trying to reproduce your experiment using exactly the same parameters will find a different performance figure (sometimes very different). Unless of course you provide the same exact split which is meaningless.
To come back to your question I think you should definitely use CV to report your performance (e.g. do a 10 folds CV and report the mean and standard deviation of the performance metric). Now for tuning your algorithm you may use a much smaller validation set sampled from the training set (make sure it is not included in the test set).
If you're afraid that you won't find the best hyperparameters using a small set then you're probably overfitting your algorithm to the specifics of the dataset. If you can't find a configuration using a small sample that gives a reasonable performance among all folds then the algorithm is probably not very useful in practice.
Also keep in mind some algorithms are simply too slow / don't scale well in some configurations. This is also a part of practical model selection.
Since you mention SVMs, of course most implementations will be slow when trying to find parameters for non-linear kernels by grid search. Grid search has exponential complexity, so use it with very few parameters. Also keep in mind that most libraries provide sensible default parameters (or at least you set one parameter and there are heuristics to set the others). | Should we always do CV?
Cross-validation is a tool to estimate the variance of your performance metric due to randomness in the data (and maybe in the learning algorithm if it not deterministic).
So if you use only one split |
32,655 | Why regularize all parameters in the same way? | Well, the parameters that represent higher exponentials (x3,x4) are drastically increasing the complexity of our model. So shouldn't we penalize more for high w3,w4 values than we penalize for high w1,w2 values?
The reason we say that adding quadratic or cubic terms increases model complexity is that it leads to a model with more parameters overall. We don't expect a quadratic term to be in and of itself more complex than a linear term. The one thing that's clear is that, all other things being equal, a model with more covariates is more complex.
For the purposes of regularization, one generally rescales all the covariates to have equal mean and variance so that, a priori, they are treated as equally important. If some covariates do in fact have a stronger relationship with the dependent variable than others, then, of course, the regularization procedure won't penalize those covariates as strongly, because they'll have greater contributions to the model fit.
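A minimal R sketch of that rescaling step; glmnet and the made-up covariates are assumptions for the example, and glmnet also standardizes internally by default, so scale() here just makes the step explicit.
# Minimal sketch (R): put covariates on a common scale before a ridge penalty.
library(glmnet)
set.seed(1)
n <- 100
x_raw <- cbind(a = rnorm(n, 170, 10),    # large-scale covariate
               b = rnorm(n, 0, 0.01))    # tiny-scale covariate
y <- 0.02 * x_raw[, "a"] + 50 * x_raw[, "b"] + rnorm(n)
x   <- scale(x_raw)              # equal mean (0) and variance (1)
fit <- glmnet(x, y, alpha = 0)   # ridge: all coefficients penalised symmetrically
coef(fit, s = 0.1)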
But what if you really do think a priori that one covariate is more important than another, and you can quantify this belief, and you want the model to reflect it? Then what you probably want to do is use a Bayesian model and adjust the priors for the coefficients to match your preexisting belief. Not coincidentally, some familiar regularization procedures can be construed as special cases of Bayesian models. In particular, ridge regression is equivalent to a normal prior on the coefficients, and lasso regression is equivalent to a Laplacian prior. | Why regularize all parameters in the same way? | Well, the parameters that represent higher exponentials (x3,x4) are drasticly increasing the complexity of our model. So shouldn't we penalize more for high w3,w4 values than we penalize for high w1,w | Why regularize all parameters in the same way?
Well, the parameters that represent higher exponentials (x3,x4) are drastically increasing the complexity of our model. So shouldn't we penalize more for high w3,w4 values than we penalize for high w1,w2 values?
The reason we say that adding quadratic or cubic terms increases model complexity is that it leads to a model with more parameters overall. We don't expect a quadratic term to be in and of itself more complex than a linear term. The one thing that's clear is that, all other things being equal, a model with more covariates is more complex.
For the purposes of regularization, one generally rescales all the covariates to have equal mean and variance so that, a priori, they are treated as equally important. If some covariates do in fact have a stronger relationship with the dependent variable than others, then, of course, the regularization procedure won't penalize those covariates as strongly, because they'll have greater contributions to the model fit.
But what if you really do think a priori that one covariate is more important than another, and you can quantify this belief, and you want the model to reflect it? Then what you probably want to do is use a Bayesian model and adjust the priors for the coefficients to match your preexisting belief. Not coincidentally, some familiar regularization procedures can be construed as special cases of Bayesian models. In particular, ridge regression is equivalent to a normal prior on the coefficients, and lasso regression is equivalent to a Laplacian prior. | Why regularize all parameters in the same way?
Well, the parameters that represent higher exponentials (x3,x4) are drasticly increasing the complexity of our model. So shouldn't we penalize more for high w3,w4 values than we penalize for high w1,w |
32,656 | Why regularize all parameters in the same way? | Great observations. To answer your question "Should we penalise 'more'?" Well, do we gain anything from imposing a priori penalty on some variables?
We kind of do the opposite in practice: remember re-scaling the input variables to the same magnitude. Different magnitudes give different a priori 'importance' to some of the variables, and we don't know which ones are important and which are not. There's an entire line of research about finding the right 'features', i.e. feature selection / representation learning.
So, here's two ways to think about it.
One could start with a simple linear basis hypothesis and no regularisation, then try a different hypothesis for the model, taking quadratic and other interactions of the input space, then add regularisation, and so on. So this 'search' goes from simple to complex. It is more of a parametric way to do it, since you produce the hypotheses about the basis.
Or, an alternative 'non-parametric' way would be to start with a really complex hypothesis, and let the regularisation do the work (e.g. penalise the complexity and arrive at something simpler) via cross-validation.
The point of regularisation and nonparametrics is to do things automatically. Let the machine do the work.
Here is a good resource on basis functions.
And finally, $L^p$ spaces and norms will clear things up even more. | Why regularize all parameters in the same way? | Great observations. To answer your question "Should we penalise 'more'?" Well, do we gain anything from imposing a priori penalty on some variables?
We kind of do the opposite in practice, remember re | Why regularize all parameters in the same way?
Great observations. To answer your question "Should we penalise 'more'?" Well, do we gain anything from imposing a priori penalty on some variables?
We kind of do the opposite in practice: remember re-scaling the input variables to the same magnitude. Different magnitudes give different a priori 'importance' to some of the variables, and we don't know which ones are important and which are not. There's an entire line of research about finding the right 'features', i.e. feature selection / representation learning.
So, here's two ways to think about it.
One could start with a simple linear basis hypothesis and no regularisation. Then have a different hypothesis of the model, taking quadratic and other interactions of the input space. Sure. Then add regularisation and so on. So this 'search' is simple to complex. More of a parametric way to do it since you produce the hypotheses about the basis.
Or, an alternative 'non-parametric' way would be to start with a really complex hypothesis, and let the regularisation do the work (e.g. penalise the complexity and arrive at something simpler) via cross-validation.
The point of regularisation and nonparametrics is to do things automatically. Let the machine do the work.
Here is a good resource on basis functions.
And finally, $L^p$ spaces and norms will clear things up even more. | Why regularize all parameters in the same way?
Great observations. To answer your question "Should we penalise 'more'?" Well, do we gain anything from imposing a priori penalty on some variables?
We kind of do the opposite in practice, remember re |
32,657 | Simple example of how "Bayesian Model Averaging" actually works | I think it might help to think of this as a two-level "meta-model". You have some collection of individual models (indexed by $m$), and then you have a meta-model, which is a distribution over the individual models (or equivalently, a distribution over values of $m$).
You can think about the model averaging as working in two steps:
First, you get the posterior predictive distribution for each model $m$ by integrating out its model-specific parameters $\theta$:
$$ P(y|x, D, m) = \int P(y|x, D, \theta, m)P(\theta| D, m)d\theta $$
Then you get the posterior predictive distribution for the meta-model, now integrating out the distribution over the models:
$$ P(y|x,D) = \int P(y|x, D, m)P(m|x, D)dm $$
Then in the machine learning context you would make predictions about $y$ based on its posterior predictive distribution given the observed covariates $x$.
To answer your question, the second step is where the model averaging happens. When you "integrate out" or "sum out" a parameter (incidentally, you can think of these as the same operation for continuous and discrete distributions respectively), that's equivalent to taking the expected value of some quantity (i.e. averaging) over that parameter. In this case, you're taking the expected value of the posterior density of $y$, which is the definition of a posterior predictive distribution.
As for priors, you're going to have two sets of them in this model: a prior for each model $m$, and a prior for the meta-model over different $m$. They will factor into determining the posterior distributions over parameters that we've integrated out (i.e. $P(\theta|D,m)$ and $P(m|x,D)$).
I will point out that in this model the authors have apparently specified that the posterior over $m$ might depend on the test predictors $x$, but the posterior over $\theta$ does not. That is, $x$ might influence how you weight the different models, but not how you weight the parameters of each individual model. I don't think that's a crazy choice, but it's not the only way to do this.
Okay. An example. I can't think of a machine learning example that's simple, but here's an easier textbook statistics example. In this model the individual models are going to be normal distributions with a fixed variance $\sigma^2$, and a random mean $\mu$. The collection of distributions (the meta-model) is over different values of $\sigma^2$. So here $\theta = \mu$ and $m = \sigma^2$. The standard prior for $\mu|\sigma^2$ is a normal distribution, and then the prior over $\sigma^2$ is an inverse-gamma distribution. You can show that the posterior predictive distribution $y$ over $\mu$ given a fixed value of $\sigma^2$ is another normal distribution with its mean pulled in the direction of the sample mean. Then you integrate out (model average) $\sigma^2$, and the posterior predictive distribution becomes a Student-t distribution over $y$. Essentially, you get something that looks kind of like a normal distribution, but it has fat tails because you've averaged over different possibilities for the variance. | Simple example of how "Bayesian Model Averaging" actually works | I think it might help to think of this as a two-level "meta-model". You have some collection of individual models (indexed by $m$), and then you have a meta-model, which is a distribution over the ind | Simple example of how "Bayesian Model Averaging" actually works
I think it might help to think of this as a two-level "meta-model". You have some collection of individual models (indexed by $m$), and then you have a meta-model, which is a distribution over the individual models (or equivalently, a distribution over values of $m$).
You can think about the model averaging as working in two steps:
First, you get the posterior predictive distribution for each model $m$ by integrating out its model-specific parameters $\theta$:
$$ P(y|x, D, m) = \int P(y|x, D, \theta, m)P(\theta| D, m)d\theta $$
Then you get the posterior predictive distribution for the meta-model, now integrating out the distribution over the models:
$$ P(y|x,D) = \int P(y|x, D, m)P(m|x, D)dm $$
Then in the machine learning context you would make predictions about $y$ based on its posterior predictive distribution given the observed covariates $x$.
To answer your question, the second step is where this is model averaging. When you "integrate out" or "sum out" a parameter (incidentally, you can think of these as the same operation for continuous and discrete distributions respectively), that's equivalent to taking the expected value of some quantity (i.e. averaging) over that parameter. In this case, you're taking the expected value of the posterior density of $y$, which is the definition of a posterior predictive distribution.
As for priors, you're going to have two sets of them in this model: a prior for each model $m$, and a prior for the meta-model over different $m$. They will factor into determining the posterior distributions over parameters that we've integrated out (i.e. $P(\theta|D,m)$ and $P(m|x,D)$).
I will point out that in this model the authors have apparently specified that the posterior over $m$ might depend on the test predictors $x$, but the posterior over $\theta$ does not. That is, $x$ might influence how you weight the different models, but not how you weight the parameters of each individual model. I don't think that's a crazy choice, but it's not the only way to do this.
Okay. An example. I can't think of a machine learning example that's simple, but here's an easier textbook statistics example. In this model the individual models are going to be normal distributions with a fixed variance $\sigma^2$, and a random mean $\mu$. The collection of distributions (the meta-model) is over different values of $\sigma^2$. So here $\theta = \mu$ and $m = \sigma^2$. The standard prior for $\mu|\sigma^2$ is a normal distribution, and then the prior over $\sigma^2$ is an inverse-gamma distribution. You can show that the posterior predictive distribution $y$ over $\mu$ given a fixed value of $\sigma^2$ is another normal distribution with its mean pulled in the direction of the sample mean. Then you integrate out (model average) $\sigma^2$, and the posterior predictive distribution becomes a Student-t distribution over $y$. Essentially, you get something that looks kind of like a normal distribution, but it has fat tails because you've averaged over different possibilities for the variance. | Simple example of how "Bayesian Model Averaging" actually works
I think it might help to think of this as a two-level "meta-model". You have some collection of individual models (indexed by $m$), and then you have a meta-model, which is a distribution over the ind |
32,658 | Simple example of how "Bayesian Model Averaging" actually works | One simple example of model averaging is when you are deciding the order of a polynomial model
$$y_i=\sum_{j=0}^kx_i^j\beta_j+e_i $$
So you don't know the betas and you also don't know the value of $k $. And $e_i\sim N (0,\sigma^2) $. For fixed $k $ you have a least squares problem - with a proper prior it is "regularized" least squares. When you do model averaging, you can think of a weighted average of the predictions for each $k $. The weights will be proportional to something like $\exp (-\frac {1}{2}BIC_k) $ in cases where the prior on the betas and the polynomial order are fairly uniform ($BIC_k $ is the bayesian information criterion for the least squares model of order $k $). | Simple example of how "Bayesian Model Averaging" actually works | One simple example of model averaging is when you are deciding the order of a polynomial model
$$y_i=\sum_{j=0}^kx_i^j\beta_j+e_i $$
So you don't know the betas and you also don't know the value of $k | Simple example of how "Bayesian Model Averaging" actually works
One simple example of model averaging is when you are deciding the order of a polynomial model
$$y_i=\sum_{j=0}^kx_i^j\beta_j+e_i $$
So you don't know the betas and you also don't know the value of $k $. And $e_i\sim N (0,\sigma^2) $. For fixed $k $ you have a least squares problem - with a proper prior it is "regularized" least squares. When you do model averaging, you can think of a weighted average of the predictions for each $k $. The weights will be proportional to something like $\exp (-\frac {1}{2}BIC_k) $ in cases where the prior on the betas and the polynomial order are fairly uniform ($BIC_k $ is the bayesian information criterion for the least squares model of order $k $). | Simple example of how "Bayesian Model Averaging" actually works
One simple example of model averaging is when you are deciding the order of a polynomial model
$$y_i=\sum_{j=0}^kx_i^j\beta_j+e_i $$
So you don't know the betas and you also don't know the value of $k |
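A minimal R sketch of the BIC-weighted averaging over polynomial order described in the answer above; the data-generating model and the candidate orders are made up for illustration.
# Minimal sketch (R): weights proportional to exp(-BIC_k / 2), then a weighted
# average of the per-order predictions.
set.seed(1)
n <- 60
x <- runif(n, -2, 2)
y <- 1 + x - 0.5 * x^2 + rnorm(n, sd = 0.5)
orders <- 1:5
fits   <- lapply(orders, function(k) lm(y ~ poly(x, k, raw = TRUE)))
bic    <- sapply(fits, BIC)
w      <- exp(-0.5 * (bic - min(bic)))   # subtract min(bic) for numerical stability
w      <- w / sum(w)                     # posterior model weights (flat prior over k)
newd  <- data.frame(x = 1.5)
preds <- sapply(fits, predict, newdata = newd)   # per-order predictions at x = 1.5
sum(w * preds)                                   # model-averaged prediction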
32,659 | Gaussian likelihood + which prior = Gaussian Marginal? | Your conjecture seems to be true: only a constant variance can lead to
a normal margin. My proof is limited to the case where the
expectation $\boldsymbol{\mu}$ is known, and hence can be assumed to
be zero. For the general case, more sophisticated arguments from
functional analysis seem to be required.
Note that the question is actually about continuous mixture of normals as well
as about Bayes. The statement proved here is that a (continuous) scale
mixture of normals can be normal only for a trivial mixture.
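A quick numerical illustration of that statement (the Gamma mixing distribution and its parameters are arbitrary illustrative choices): a genuine scale mixture of normals shows clearly positive excess kurtosis, unlike a fixed-variance normal.
# Minimal sketch (R): non-degenerate scale mixture of normals vs. a plain normal.
set.seed(1)
N <- 1e6
omega <- rgamma(N, shape = 3, rate = 3)        # random precision (non-degenerate "prior")
y <- rnorm(N, mean = 0, sd = 1 / sqrt(omega))  # scale mixture draw
z <- rnorm(N)                                  # degenerate case: constant precision
exkurt <- function(v) mean((v - mean(v))^4) / var(v)^2 - 3
c(mixture = exkurt(y), normal = exkurt(z))     # mixture has heavy tails, normal is ~ 0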
First consider the case of a one-dimensional normal with known mean
$\mu = 0$ and precision parameter $\omega := 1 / \Sigma >0$. Without loss of
generality, we can assume that the parameter $\boldsymbol{\theta}$ is
the precision $\omega$ itself. If the marginal distribution of $y$ is
normal, then $\int \exp\{-y^2 \omega / 2\}\,\omega^{1/2}
p(\omega)\,\text{d}\omega$ is a normal density up to a multiplicative
constant. This density being an even function of $y$ must take the
form $c\exp\{ -y^2 \omega_0 / 2\}$ for some $\omega_0 >0$ and some
constant $c >0$. Since this holds for any $y$ we get with $s := y^2$
$$ \int_0^\infty \exp\{-s \omega \,/ 2\}\,\omega^{1/2}
p(\omega)\text{d}\omega = c \exp\{ -s \omega_0 \,/ 2\} $$ for all $s
\geq 0$, which shows that the finite measure with density function
$\omega \mapsto \omega^{1/2} p(\omega)$ is proportional to the Dirac
mass at $\omega_0$ because these two measures have the same Laplace
transform, up to a multiplicative constant. Thus $\omega$ is almost
surely (a.s.) equal to $\omega_0$.
This proof extends to the
$d$-dimensional normal with mean zero and precision matrix
$\boldsymbol{\Omega}:=\boldsymbol{\Sigma}^{-1}$. The margin then writes
as $\propto \int \exp\{-\mathbf{y}^\top \boldsymbol{\Omega}\,\mathbf{y} \,/
2\}\,
\left|\boldsymbol{\Omega}\right|^{1/2}p(\boldsymbol{\Omega})\,\text{d}\boldsymbol{\Omega}$
where the integral is on the set $\mathcal{P}$ of positive definite
symmetric $d \times d$ matrices. If this integral is identical to
$c\exp\{ -\mathbf{y}^\top \boldsymbol{\Omega}_0 \mathbf{y} / 2\}$,
then by taking $\mathbf{y}:= \sqrt{s} \,\boldsymbol{u}$ for a scalar
$s \geq 0$ and an arbitrary vector $\mathbf{u}$, we find as above that
$\mathbf{u}^\top \boldsymbol{\Omega}\, \mathbf{u}$ must be a.s. equal
to $\mathbf{u}^\top \boldsymbol{\Omega}_0 \mathbf{u}$, which shows
that $\boldsymbol{\Omega}$ is a.s. equal to $\boldsymbol{\Omega}_0$.
The proof works even if the measure conveniently written as having
density $|\boldsymbol{\Omega}|^{1/2} p(\boldsymbol{\Omega})$
concentrates on a subset of $\mathcal{P}$ with Lebesgue measure zero,
because the Laplace transform argument still applies. So the proof
works for a general parameterisation of the precision (or variance)
matrix. | Gaussian likelihood + which prior = Gaussian Marginal? | Your conjecture seems to be true: only a constant variance can lead to
a normal margin. My proof is limited to the case where the
expectation $\boldsymbol{\mu}$ is known, and hence can be assumed to
b | Gaussian likelihood + which prior = Gaussian Marginal?
Your conjecture seems to be true: only a constant variance can lead to
a normal margin. My proof is limited to the case where the
expectation $\boldsymbol{\mu}$ is known, and hence can be assumed to
be zero. For the general case, more sophisticated arguments from
functional analysis seem to be required.
Note that the question is actually about continuous mixture of normals as well
as about Bayes. The statement proved here it that a (continuous) scale
mixture of normals can be normal only for a trivial mixture.
First consider the case of a one-dimensional normal with known mean
$\mu = 0$ and precision parameter $\omega := 1 / \Sigma >0$. Without loss of
generality, we can assume that the parameter $\boldsymbol{\theta}$ is
the precision $\omega$ itself. If the marginal distribution of $y$ is
normal, then $\int \exp\{-y^2 \omega / 2\}\,\omega^{1/2}
p(\omega)\,\text{d}\omega$ is a normal density up to a multiplicative
constant. This density being an even function of $y$ must take the
form $c\exp\{ -y^2 \omega_0 / 2\}$ for some $\omega_0 >0$ and some
constant $c >0$. Since this holds for any $y$ we get with $s := y^2$
$$ \int_0^\infty \exp\{-s \omega \,/ 2\}\,\omega^{1/2}
p(\omega)\text{d}\omega = c \exp\{ -s \omega_0 \,/ 2\} $$ for all $s
\geq 0$, which shows that the finite measure with density function
$\omega \mapsto \omega^{1/2} p(\omega)$ is proportional to the Dirac
mass at $\omega_0$ because these two measures have the same Laplace
transform, up to a multiplicative constant. Thus $\omega$ is almost
surely (a.s.) equal to $\omega_0$.
This proof extends to the
$d$-dimensional normal with mean zero and precision matrix
$\boldsymbol{\Omega}:=\boldsymbol{\Sigma}^{-1}$. The margin then writes
as $\propto \int \exp\{-\mathbf{y}^\top \boldsymbol{\Omega}\,\mathbf{y} \,/
2\}\,
\left|\boldsymbol{\Omega}\right|^{1/2}p(\boldsymbol{\Omega})\,\text{d}\boldsymbol{\Omega}$
where the integral is on the set $\mathcal{P}$ of positive definite
symmetric $d \times d$ matrices. If this integral is identical to
$c\exp\{ -\mathbf{y}^\top \boldsymbol{\Omega}_0 \mathbf{y} / 2\}$,
then by taking $\mathbf{y}:= \sqrt{s} \,\boldsymbol{u}$ for a scalar
$s \geq 0$ and an arbitrary vector $\mathbf{u}$, we find as above that
$\mathbf{u}^\top \boldsymbol{\Omega}\, \mathbf{u}$ must be a.s. equal
to $\mathbf{u}^\top \boldsymbol{\Omega}_0 \mathbf{u}$, which shows
that $\boldsymbol{\Omega}$ is a.s. equal to $\boldsymbol{\Omega}_0$.
The proof works even if the measure conveniently written as having
density $|\boldsymbol{\Omega}|^{1/2} p(\boldsymbol{\Omega})$
concentrates on a subset of $\mathcal{P}$ with Lebesgue measure zero,
because the Laplace transform argument still applies. So the proof
works for a general parameterisation of the precision (or variance)
matrix. | Gaussian likelihood + which prior = Gaussian Marginal?
Your conjecture seems to be true: only a constant variance can lead to
a normal margin. My proof is limited to the case where the
expectation $\boldsymbol{\mu}$ is known, and hence can be assumed to
b |
32,660 | Gaussian likelihood + which prior = Gaussian Marginal? | Assume that $\mu$ and $\Sigma$ are a priori independent and that $y$
has a normal margin with mean $\mu_0$ and variance $\Sigma_0$. I
will prove that then the variance $\Sigma$ must be constant, and
the mean $\mu$ must have a normal prior (possibly degenerate).
I will stick to the one-dimensional case for simplicity, using the
characteristic function (c.f.) of $y$, i.e. $\phi_y(t) :=
\mathbb{E}[e^{yit}]$. We know that $\phi_y(t) = \exp\{\mu_0 it - \Sigma_0 t^2 /2\}$ and
a similar formula holds for the distribution of $y$ conditional on $\mu$
and $\Sigma$, which is normal by assumption. So for any real $t$
$$
\mathbb{E}[e^{yit}] = \int \mathbb{E}\left[e^{yit} \, \vert \,\mu,\,\Sigma\right]\,
p(\mu) p(\Sigma)
\,\text{d}\mu \text{d} \Sigma =
\int \exp\left\{ \mu it - \Sigma t^2/2 \right\} \,p(\mu) p(\Sigma)
\,\text{d}\mu \text{d}\Sigma,
$$
and by rearranging the integral, we must have
$$
\exp\left\{ \mu_0 it - \Sigma_0 t^2 /2 \right\} =
\left[\int \exp\left\{ \mu it \right\} p(\mu)
\,\text{d}\mu \right]
\left[\int \exp\left\{ -\Sigma t^2/2\right\} p(\Sigma)
\,\text{d}\Sigma \right].
$$
The assumptions needed for such a rearrangement are easily checked.
The first integral at right hand side, say $\phi_1(t)$, is the c.f. of
$\mu$. Note that since $\phi_1(t) e^{-\mu_0 it}$ is found to be real, we see that
the distribution of $\mu$ is symmetric w.r.t. $\mu_0$, and hence that
$\mathbb{E}[\mu] = \mu_0$, as it might have been anticipated.
Now it turns out that the second integral at right hand side, say
$\phi_2(t)$, is also a c.f. To see that, we must check that $\phi_2(0)
= 1$, that $\phi_2$ is continuous at $t=0$ and also that the function
$\phi_2$ is positive definite (p.d.). The first requirement is
obvious, the second is proved by dominated convergence. Now turn to
the p.d. requirement: if the prior distribution written as
$p(\Sigma)\text{d}\Sigma$ is a Dirac mass, then $\phi_2$ is
p.d. because $\phi_2$ is then the c.f. of a normal distribution. If
the prior is a discrete mixture of Dirac masses, this is true as well
since $\phi_2$ then is the c.f. of a mixture of normals. By a continuity
argument, we see that $\phi_2$ is p.d.
Now let us use the powerful Lévy-Cramér theorem which tells that
both functions $\phi_j$ for $j=1$, $2$ must take the form $\exp\{a_j i t -
b_jt^2 /2 \}$ with $a_j$ real and $b_j \geq 0$. So $\mu$
must be normal (possibly degenerate) with mean $a_1 = \mu_0$.
By simple algebra we then have
$$
\exp\{ -(\Sigma_0 - b_1) t^2 /2 \} = \int_0^\infty \exp\{ - \Sigma t^2 /2\} p(\Sigma)
\, \text{d} \Sigma
$$
which holds for any real $t$. Since any non-negative real writes as $t^2/2$, we see that
the Laplace transform of the prior of $\Sigma$ must be equal to that
of the Dirac mass at $\Sigma_0 - b_1$ and we are done. | Gaussian likelihood + which prior = Gaussian Marginal? | Assume that $\mu$ and $\Sigma$ are a priori independent and that $y$
has a normal margin with mean $\mu_0$ and variance $\Sigma_0$. I
will prove that then the variance $\Sigma$ must be constant, and
t | Gaussian likelihood + which prior = Gaussian Marginal?
Assume that $\mu$ and $\Sigma$ are a priori independent and that $y$
has a normal margin with mean $\mu_0$ and variance $\Sigma_0$. I
will prove that then the variance $\Sigma$ must be constant, and
the mean $\mu$ must have a normal prior (possibly degenerate).
I will stick to the one-dimensional case for simplicity, using the
characteristic function (c.f.) of $y$, i.e. $\phi_y(t) :=
\mathbb{E}[e^{yit}]$. We know that $\phi_y(t) = \exp\{\mu_0 it - \Sigma_0 t^2 /2\}$ and
a similar formula holds for the distribution of $y$ conditional on $\mu$
and $\Sigma$, which is normal by assumption. So for any real $t$
$$
\mathbb{E}[e^{yit}] = \int \mathbb{E}\left[e^{yit} \, \vert \,\mu,\,\Sigma\right]\,
p(\mu) p(\Sigma)
\,\text{d}\mu \text{d} \Sigma =
\int \exp\left\{ \mu it - \Sigma t^2/2 \right\} \,p(\mu) p(\Sigma)
\,\text{d}\mu \text{d}\Sigma,
$$
and by rearranging the integral, we must have
$$
\exp\left\{ \mu_0 it - \Sigma_0 t^2 /2 \right\} =
\left[\int \exp\left\{ \mu it \right\} p(\mu)
\,\text{d}\mu \right]
\left[\int \exp\left\{ -\Sigma t^2/2\right\} p(\Sigma)
\,\text{d}\Sigma \right].
$$
The assumptions needed for such a rearrangement are easily checked.
The first integral at right hand side, say $\phi_1(t)$, is the c.f. of
$\mu$. Note that since $\phi_1(t) e^{-\mu_0 it}$ is found to be real, we see that
the distribution of $\mu$ is symmetric w.r.t. $\mu_0$, and hence that
$\mathbb{E}[\mu] = \mu_0$, as it might have been anticipated.
Now it turns out that the second integral at right hand side, say
$\phi_2(t)$, is also a c.f. To see that, we must check that $\phi_2(0)
= 1$, that $\phi_2$ is continuous at $t=0$ and also that the function
$\phi_2$ is positive definite (p.d.). The first requirement is
obvious, the second is proved by dominated convergence. Now turn to
the p.d. requirement: if the prior distribution written as
$p(\Sigma)\text{d}\Sigma$ is a Dirac mass, then $\phi_2$ is
p.d. because $\phi_2$ is then the c.f. of a normal distribution. If
the prior is a discrete mixture of Dirac masses, this is true as well
since $\phi_2$ then is the c.f. of a mixture of normals. By a continuity
argument, we see that $\phi_2$ is p.d.
Now let us use the powerful Lévy-Cramér theorem which tells that
both functions $\phi_j$ for $j=1$, $2$ must take the form $\exp\{a_j i t -
b_jt^2 /2 \}$ with $a_j$ real and $b_j \geq 0$. So $\mu$
must be normal (possibly degenerate) with mean $a_1 = \mu_0$.
By simple algebra we then have
$$
\exp\{ -(\Sigma_0 - b_1) t^2 /2 \} = \int_0^\infty \exp\{ - \Sigma t^2 /2\} p(\Sigma)
\, \text{d} \Sigma
$$
which holds for any real $t$. Since any non-negative real writes as $t^2/2$, we see that
the Laplace transform of the prior of $\Sigma$ must be equal to that
of the Dirac mass at $\Sigma_0 - b_1$ and we are done. | Gaussian likelihood + which prior = Gaussian Marginal?
Assume that $\mu$ and $\Sigma$ are a priori independent and that $y$
has a normal margin with mean $\mu_0$ and variance $\Sigma_0$. I
will prove that then the variance $\Sigma$ must be constant, and
t |
32,661 | Gaussian likelihood + which prior = Gaussian Marginal? | I have a proposition of proof for you, but you need to check it.
Assume that the marginal likelihood is Gaussian :
$p(y)=\mathcal{N}(y,m,\Gamma)$
then the prior density can be defined by
$p(\theta)=\mathcal{N}(y,\mu(\theta),\Sigma(\theta))^{-1}\mathcal{N}(y,m,\Gamma)f(\theta)$
where $f$ satisfies $\int_{\theta\in\Theta}f(\theta)d\theta =1$ and $f(\theta)\geq 0$ for $\theta\in\Theta$. ($f(\theta)$ is $p(\theta|y)$).
To be a density, the integral of the prior density $p(\theta)$ on $\Theta$ has to be equal to 1.
In other words,
$\int_{\theta\in\Theta}\mathcal{N}(y,\mu(\theta),\Sigma(\theta))^{-1}\mathcal{N}(y,m,\Gamma)f(\theta)d\theta =1$.
It leads to
$\int_{\theta\in\Theta}\mathcal{N}(y,\mu(\theta),\Sigma(\theta))^{-1}\mathcal{N}(y,m,\Gamma)f(\theta)d\theta = \int_{\theta\in\Theta}f(\theta)d\theta$
This equality is true if and only if $\mu(\theta)=m$ and $\Sigma(\theta)=\Gamma$.
Assume that the marginal likelihood is Gaussian :
$p(y)=\mathcal{N}(y,m,\Gamma)$
then the prior density can be defined by
$p(\theta)=\m | Gaussian likelihood + which prior = Gaussian Marginal?
I have a proposition of proof for you, but you need to check it.
Assume that the marginal likelihood is Gaussian :
$p(y)=\mathcal{N}(y,m,\Gamma)$
then the prior density can be defined by
$p(\theta)=\mathcal{N}(y,\mu(\theta),\Sigma(\theta))^{-1}\mathcal{N}(y,m,\Gamma)f(\theta)$
where $f$ checks $\int_{\theta\in\Theta}f(\theta)d\theta =1$ and $f(\theta)\geq 0$ for $\theta\in\Theta$. ($f(\theta)$ is $p(\theta|y)$).
To be a density, the integral of the prior density $p(\theta)$ on $\Theta$ has to be equal to 1.
In other words,
$\int_{\theta\in\Theta}\mathcal{N}(y,\mu(\theta),\Sigma(\theta))^{-1}\mathcal{N}(y,m,\Gamma)f(\theta)d\theta =1$.
It leads to
$\int_{\theta\in\Theta}\mathcal{N}(y,\mu(\theta),\Sigma(\theta))^{-1}\mathcal{N}(y,m,\Gamma)f(\theta)d\theta = \int_{\theta\in\Theta}f(\theta)d\theta$
This equality being true if and only if $\mu(\theta)=m$ and $\Sigma(\theta)=\Gamma$. | Gaussian likelihood + which prior = Gaussian Marginal?
I have a proposition of proof for you, but you need to check it.
Assume that the marginal likelihood is Gaussian :
$p(y)=\mathcal{N}(y,m,\Gamma)$
then the prior density can be defined by
$p(\theta)=\m |
32,662 | priors for Gamma shape and scale parameters | Alternatively, the reference prior for the ordering $\alpha$, $\beta$ is (http://www.stats.org.uk/priors/noninformative/YangBerger1998.pdf page 13):
$$
\Pi(\alpha,\beta) \propto \frac{\sqrt{\alpha PG(1,\alpha)-1}}{\sqrt{\alpha}\beta}
$$
where $PG(1,x)=\sum_{i=0}^{\infty} (x+ i)^{-2}$ is the polygamma function. It results in proper posteriors. | priors for Gamma shape and scale parameters | Alternatively, the reference prior for the ordering $\alpha$, $\beta$ is (http://www.stats.org.uk/priors/noninformative/YangBerger1998.pdf page 13):
$$
\Pi(\alpha,\beta) \propto \frac{\sqrt{\alpha PG( | priors for Gamma shape and scale parameters
Alternatively, the reference prior for the ordering $\alpha$, $\beta$ is (http://www.stats.org.uk/priors/noninformative/YangBerger1998.pdf page 13):
$$
\Pi(\alpha,\beta) \propto \frac{\sqrt{\alpha PG(1,\alpha)-1}}{\sqrt{\alpha}\beta}
$$
where $PG(1,x)=\sum_{i=0}^{\infty} (x+ i)^{-2}$ is the polygamma function. It results in proper posteriors. | priors for Gamma shape and scale parameters
Alternatively, the reference prior for the ordering $\alpha$, $\beta$ is (http://www.stats.org.uk/priors/noninformative/YangBerger1998.pdf page 13):
$$
\Pi(\alpha,\beta) \propto \frac{\sqrt{\alpha PG( |
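For completeness, a minimal R sketch evaluating this reference prior (up to the proportionality constant); base R's trigamma(x) is exactly $PG(1,x)$, and the parameter values are arbitrary.
# Minimal sketch (R): the Yang-Berger reference prior above, up to proportionality.
ref_prior <- function(alpha, beta) {
  sqrt(alpha * trigamma(alpha) - 1) / (sqrt(alpha) * beta)   # trigamma(x) = PG(1, x)
}
ref_prior(alpha = 2, beta = 1.5)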
32,663 | priors for Gamma shape and scale parameters | Half-Cauchy family was recommended instead of the inverse-gamma prior for scale parameters. Gelman (2006) recommended this because the inverse-gamma prior could be sensitive in inference problems if the variance estimates are close to zero. The density function of half-Cauchy is as follows (it only takes one parameter, $d$):
$$
f(x|d) = \frac{2d}{\pi\left(d^{2} + x^{2}\right)}, \quad x>0, d>0
$$
Therefore, you can use half-Cauchy(2.5) for parameters greater than zero. | priors for Gamma shape and scale parameters | Half-Cauchy family was recommended instead of the inverse-gamma prior for scale parameters. Gelman (2006) recommended this because the inverse-gamma prior could be sensitive in inference problems if t | priors for Gamma shape and scale parameters
Half-Cauchy family was recommended instead of the inverse-gamma prior for scale parameters. Gelman (2006) recommended this because the inverse-gamma prior could be sensitive in inference problems if the variance estimates are close to zero. The density function of half-Cauchy is as follows (it only takes one parameter, $d$):
$$
f(x|d) = \frac{2d}{\pi\left(d^{2} + x^{2}\right)}, \quad x>0, d>0
$$
Therefore, you can use half-Cauchy(2.5) for parameters greater than zero. | priors for Gamma shape and scale parameters
Half-Cauchy family was recommended instead of the inverse-gamma prior for scale parameters. Gelman (2006) recommended this because the inverse-gamma prior could be sensitive in inference problems if t |
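A minimal R sketch of working with this density (d = 2.5 as in the recommendation; the helper name is my own): for x > 0 it is just twice the Cauchy(0, d) density, and |Cauchy(0, d)| draws are half-Cauchy(d).
# Minimal sketch (R): half-Cauchy(d) density and draws, using base R's Cauchy functions.
d <- 2.5
dhalfcauchy <- function(x, d) ifelse(x > 0, 2 * dcauchy(x, location = 0, scale = d), 0)
dhalfcauchy(1, d)                                    # density at x = 1
draws <- abs(rcauchy(1e4, location = 0, scale = d))  # half-Cauchy(d) samples
quantile(draws, c(0.5, 0.9))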
32,664 | Two methods of using bootstraping to test the difference between two sample means | The issue is that your bootstrap in boot_t_B isn't correctly done. If you're not correcting the means to be the same (i.e., forcing the null hypothesis to be true by re-centering each sample), you force the null hypothesis to be true by sampling from the two samples combined:
boot.c <- sample(c(x1,y1), size=length(x), replace=T)  # resample group 1 from the pooled data
boot.p <- sample(c(x1,y1), size=length(y), replace=T)  # resample group 2 from the pooled data
The reason for this is that if the means ARE different, in your original formulation boot.c and boot.p are actually samples from the alternative hypothesis where the alternative distributions are "centered" at the data. You can think of it as bootstrap sampling from the alternative distribution that is most likely given the data, only you're being nonparametric instead of using a parametric bootstrap. Consequently, you don't get p-values, which of course are calculated assuming the null hypothesis.
If you do it this way, you get:
> set.seed(1678)
> boot_t_B(rnorm(25,0,10), rnorm(25,5,10))
[1] 0.05
> set.seed(1678)
> boot_t_F(rnorm(25,0,10), rnorm(25,5,10))
[1] 0.0507 | Two methods of using bootstraping to test the difference between two sample means | The issue is that your bootstrap in boot_t_B isn't correctly done. If you're not correcting the means to be the same (i.e., forcing the null hypothesis to be true by re-centering each sample), you fo | Two methods of using bootstraping to test the difference between two sample means
The issue is that your bootstrap in boot_t_B isn't correctly done. If you're not correcting the means to be the same (i.e., forcing the null hypothesis to be true by re-centering each sample), you force the null hypothesis to be true by sampling from the two samples combined:
boot.c <- sample(c(x1,y1), size=length(x), replace=T)
boot.p <- sample(c(x1,y1), size=length(y), replace=T)
The reason for this is that if the means ARE different, in your original formulation boot.c and boot.p are actually samples from the alternative hypothesis where the alternative distributions are "centered" at the data. You can think of it as bootstrap sampling from the alternative distribution that is most likely given the data, only you're being nonparametric instead of using a parametric bootstrap. Consequently, you don't get p-values, which of course are calculated assuming the null hypothesis.
If you do it this way, you get:
> set.seed(1678)
> boot_t_B(rnorm(25,0,10), rnorm(25,5,10))
[1] 0.05
> set.seed(1678)
> boot_t_F(rnorm(25,0,10), rnorm(25,5,10))
[1] 0.0507 | Two methods of using bootstraping to test the difference between two sample means
The issue is that your bootstrap in boot_t_B isn't correctly done. If you're not correcting the means to be the same (i.e., forcing the null hypothesis to be true by re-centering each sample), you fo |
32,665 | Two methods of using bootstraping to test the difference between two sample means | Adding to @jbowman's answer: to gain better intuition for the bootstrap procedure for the $t$-test, you can think of the permutation test that you would use in this situation (check e.g. one of the Introduction to Statistics Through Resampling Methods books by Phillip I. Good, or other books on this topic by this author). For conducting a permutation test we would assume that under the null hypothesis all the values are randomly redistributed between the groups, so the permutation procedure would be to randomly reassign group labels. You have to sample from the null distribution, and this can be achieved by reassigning group labels, or by subtracting group means as suggested by Efron and Tibshirani.
perm_test <- function(x, y) {
B <- 10000
nx <- length(x)
ny <- length(y)
N <- nx+ny
xy <- c(x, y)
orig <- mean(x) - mean(y)
res <- numeric(B)
for (i in 1:B) {
idx <- sample(N, nx)
tmpx <- xy[idx]
tmpy <- xy[-idx]
res[i] <- mean(tmpx) - mean(tmpy)
}
mean(orig > res)
}
The result is similar to that of the "proper" bootstrap, i.e. the one used as suggested by @jbowman:
> set.seed(1678)
> perm_test(rnorm(25,0,10), rnorm(25,5,10))
[1] 0.051 | Two methods of using bootstraping to test the difference between two sample means | Adding to @jbowman's answer, for gaining better intuition of bootstrap procedure for $t$-test you can think of permutation test that you would use in this situation (check e.g. one of the Introduction | Two methods of using bootstraping to test the difference between two sample means
Adding to @jbowman's answer: to gain better intuition for the bootstrap procedure for the $t$-test, you can think of the permutation test that you would use in this situation (check e.g. one of the Introduction to Statistics Through Resampling Methods books by Phillip I. Good, or other books on this topic by this author). For conducting a permutation test we would assume that under the null hypothesis all the values are randomly redistributed between the groups, so the permutation procedure would be to randomly reassign group labels. You have to sample from the null distribution, and this can be achieved by reassigning group labels, or by subtracting group means as suggested by Efron and Tibshirani.
perm_test <- function(x, y) {
B <- 10000
nx <- length(x)
ny <- length(y)
N <- nx+ny
xy <- c(x, y)
orig <- mean(x) - mean(y)
res <- numeric(B)
for (i in 1:B) {
idx <- sample(N, nx)
tmpx <- xy[idx]
tmpy <- xy[-idx]
res[i] <- mean(tmpx) - mean(tmpy)
}
mean(orig > res)
}
The result is similar to the "proper" bootstrap, or used as suggested by @jbowman:
> set.seed(1678)
> perm_test(rnorm(25,0,10), rnorm(25,5,10))
[1] 0.051 | Two methods of using bootstraping to test the difference between two sample means
Adding to @jbowman's answer, for gaining better intuition of bootstrap procedure for $t$-test you can think of permutation test that you would use in this situation (check e.g. one of the Introduction |
32,666 | Confidence Intervals for ECDF | I see no way of using the delta method, but...
Reading about the convergence of the empirical distribution function, we see that the central limit theorem gives us:
$\sqrt{n}(\hat{F}_n(x)-F(x)) \rightarrow \mathcal{N}(0,F(x)(1-F(x)))$
We can use this to create varying CI's around each $\hat{F}_n(x)$:
$\hat{F}_n(x) \pm 1.96\sqrt{\frac{\hat{F}_n(x)(1-\hat{F}_n(x))}{n}}$,
since $E(\hat{F}_n(x))=F(x)$, $\hat{F}_n(x)$ is our best estimate of $F(x)$.
Using the following R-code:
#confidence bands calculation:
sim_norm<-rnorm(100)
plot(sim_norm)
hist(sim_norm)
sim_norm_sort<-sort(sim_norm)
n = sum(!is.na(sim_norm_sort))
plot(sim_norm_sort, (1:n)/n, type = 's', ylim = c(0, 1),
xlab = 'sample', ylab = '', main = 'Empirical Cumluative Distribution')
# Dvoretzky–Kiefer–Wolfowitz inequality:
# P ( sup|F_n - F| > epsilon ) leq 2*exp(-2n*epsilon^2)
# set alpha to 0.05 and alpha=2*exp(-2n*epsilon^2):
# --> epsilon_n = sqrt(-log(0.5*0.05)/(2*n))
#
#lower and upper bands:
L<-1:n
U<-1:n
epsilon_i = sqrt(log(2/0.05)/(2*n))
L=pmax(1:n/n-epsilon_i, 0)
U=pmin(1:n/n+epsilon_i, 1)
lines(sim_norm_sort, U, col="blue")
lines(sim_norm_sort, L, col="blue")
#using clt:
U2=(1:n/n)+1.96*sqrt( (1:n/n)*(1-1:n/n)/n )
L2=(1:n/n)-1.96*sqrt( (1:n/n)*(1-1:n/n)/n )
lines(sim_norm_sort, L2, col="red")
lines(sim_norm_sort, U2, col="red")
We get:
We see that the red bands (from the CLT method) give us narrower confidence bands.
EDIT:
As @Kjetil B Halvorsen pointed out - these two types of bands are different types. I had @Glen_b explain exactly what he meant:
Very different kinds of confidence bands. With a pointwise confidence band you'd expect a number of points outside the band even
if it was the distribution from which the data were drawn. With
simultaneous bands you wouldn't. If you have a 95% pointwise band,
on average 5% of the points for the correct distribution would be
outside the bands. With simultaneous bands, there's a 5% chance that
the point with the biggest deviation is outside.
Many thanks to both! | Confidence Intervals for ECDF | I see no way of using the delta method, but...
Reading about the convergence of the empirical distribution function we read that the central limit theorem gives us:
$\sqrt{n}(\hat{F}_n(x)-F(x)) \right | Confidence Intervals for ECDF
I see no way of using the delta method, but...
Reading about the convergence of the empirical distribution function we read that the central limit theorem gives us:
$\sqrt{n}(\hat{F}_n(x)-F(x)) \rightarrow \mathcal{N}(0,F(x)(1-F(x)))$
We can use this to create varying CI's around each $\hat{F}_n(x)$:
$\hat{F}_n(x) \pm 1.96\sqrt{\frac{\hat{F}_n(x)(1-\hat{F}_n(x))}{n}}$,
since $E(\hat{F}_n(x))=F(x)$, $\hat{F}_n(x)$ is our best estimate of $F(x)$.
Using the following R-code:
#confidence bands calculation:
sim_norm<-rnorm(100)
plot(sim_norm)
hist(sim_norm)
sim_norm_sort<-sort(sim_norm)
n = sum(!is.na(sim_norm_sort))
plot(sim_norm_sort, (1:n)/n, type = 's', ylim = c(0, 1),
xlab = 'sample', ylab = '', main = 'Empirical Cumluative Distribution')
# Dvoretzky–Kiefer–Wolfowitz inequality:
# P ( sup|F_n - F| > epsilon ) leq 2*exp(-2n*epsilon^2)
# set alpha to 0.05 and alpha=2*exp(-2n*epsilon^2):
# --> epsilon_n = sqrt(-log(0.5*0.05)/(2*n))
#
#lower and upper bands:
L<-1:n
U<-1:n
epsilon_i = sqrt(log(2/0.05)/(2*n))
L=pmax(1:n/n-epsilon_i, 0)
U=pmin(1:n/n+epsilon_i, 1)
lines(sim_norm_sort, U, col="blue")
lines(sim_norm_sort, L, col="blue")
#using clt:
U2=(1:n/n)+1.96*sqrt( (1:n/n)*(1-1:n/n)/n )
L2=(1:n/n)-1.96*sqrt( (1:n/n)*(1-1:n/n)/n )
lines(sim_norm_sort, L2, col="red")
lines(sim_norm_sort, U2, col="red")
We get:
We see that the red bands (from the CLT method) give us narrower confidence bands.
EDIT:
As @Kjetil B Halvorsen pointed out - these two types of bands are different types. I had @Glen_b explain exactly what he meant:
Very different kinds of confidence bands. With a pointwise confidence band you'd expect a number of points outside the band even
if it was the distribution from which the data were drawn. With
simultaneous bands you wouldn't. If you have a 95% pointwise band,
on average 5% of the points for the correct distribution would be
outside the bands. With simultaneous bands, there's a 5% chance that
the point with the biggest deviation is outside.
Many thanks to both! | Confidence Intervals for ECDF
I see no way of using the delta method, but...
Reading about the convergence of the empirical distribution function we read that the central limit theorem gives us:
$\sqrt{n}(\hat{F}_n(x)-F(x)) \right |
32,667 | Difference between weights and prior in rpart and how to use them | I see two questions here.
1) What is the difference between weights and parms in rpart?
If you look at the code, the weights argument is passed to the model.frame object, so it is applied to each observation of your dataset, just like in lm.
if (is.data.frame(model)) {
m <- model ## <---- m is defined here
model <- FALSE
}
else {
indx <- match(c("formula", "data", "weights", "subset"),
names(Call), nomatch = 0L)
if (indx[1] == 0L)
stop("a 'formula' argument is required")
temp <- Call[c(1L, indx)]
temp$na.action <- na.action
temp[[1L]] <- quote(stats::model.frame) ## <---- passed to model.frame
m <- eval.parent(temp)
}
Terms <- attr(m, "terms")
if (any(attr(Terms, "order") > 1L))
stop("Trees cannot handle interaction terms")
Y <- model.response(m)
wt <- model.weights(m) ## <---- used as observation weights
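So, if you want per-observation weights, you pass them the same way as in lm. A minimal sketch (the data frame data, the model formula and the weight vector w are placeholders here, not from the original question):
fit_w <- rpart(y ~ x1 + x2 + x3, data = data, weights = w)  # w = one case weight per row of data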
On the other hand, parms is for the class weights, which deals with unbalanced class size. I believe this is what you are looking for.
2) How to use the parms argument?
If you look at the description of parms:
For classification splitting, the list can contain any of: the vector of prior probabilities (component prior), ...
Hence, you want to store your prior probability vector in a list with name "prior". The order of probability should be exactly the same as the output of levels(data$y), where y indicates your response variable. For example, you might want to try something like the following:
fit <- rpart(y ~ x1 + x2 + x3, data = data, parms = list(prior = c(0.000066, 1 - 0.000066))) | Difference between weights and prior in rpart and how to use them | I see two questions here.
1) What is the difference between weights and parms in rpart?
If you look at the code, weights argument is passed to the model.frame object, so it should be applied towards e | Difference between weights and prior in rpart and how to use them
I see two questions here.
1) What is the difference between weights and parms in rpart?
If you look at the code, weights argument is passed to the model.frame object, so it should be applied towards each observation of your dataset, just like in lm.
if (is.data.frame(model)) {
m <- model ## <---- m is defined here
model <- FALSE
}
else {
indx <- match(c("formula", "data", "weights", "subset"),
names(Call), nomatch = 0L)
if (indx[1] == 0L)
stop("a 'formula' argument is required")
temp <- Call[c(1L, indx)]
temp$na.action <- na.action
temp[[1L]] <- quote(stats::model.frame) ## <---- passed to model.frame
m <- eval.parent(temp)
}
Terms <- attr(m, "terms")
if (any(attr(Terms, "order") > 1L))
stop("Trees cannot handle interaction terms")
Y <- model.response(m)
wt <- model.weights(m) ## <---- used as observation weights
On the other hand, parms is for the class weights, which deals with unbalanced class size. I believe this is what you are looking for.
2) How to use the parms argument?
If you look at the description of parms:
For classification splitting, the list can contain any of: the vector of prior probabilities (component prior), ...
Hence, you want to store your prior probability vector in a list with name "prior". The order of probability should be exactly the same as the output of levels(data$y), where y indicates your response variable. For example, you might want to try something like the following:
fit <- rpart(y ~ x1 + x2 + x3, data = data, parms = list(prior = c(0.000066, 1 - 0.000066))) | Difference between weights and prior in rpart and how to use them
I see two questions here.
1) What is the difference between weights and parms in rpart?
If you look at the code, weights argument is passed to the model.frame object, so it should be applied towards e |
32,668 | Multiple ARIMA models fit data well. How to determine order? Correct approach? | 1) Can you still describe the ACF of the time series as cutting off despite the spikes around lag 26?
26 and 27 suggest to me that the data is weekly, with some sort of annual cycle of order 26 or 52.
Are these outliers an indicator that a mixed ARMA model might be more appropriate?
If there are outliers in the observed series then the ARIMA model becomes a Transfer Function Model with dummy inputs.
Outliers in the acf/pacf are usually non-interpretable. Rather, use the acf/pacf of a tentative model suggested by the dominant acf/pacf and then ITERATE to a more complex model.
Which Information Criterion should I choose? AIC? AICC?
The residuals of the three models with the highest AIC do all show white noise behavior, but the difference in the AIC is only very small. Should I use the one with the fewest parameters, i.e. an ARIMA(0,1,1)?
None as it is based upon a trial set of assumed models.
Is my argumentation in general plausible?
Vague question ... even vaguer response.
Are there further possibilities to determine which model might be better, or should I, for example, take the two with the highest AIC and perform backtests to test the plausibility of forecasts?
Simply ITERATE (slowly!) to more/less complicated models incorporating both auto-regressive structure and deterministic structure. See http://www.autobox.com/cms/index.php/blog/entry/build-or-make-your-own-arima-forecasting-mode for a logic flow diagram.
EDIT AFTER RECEIPT OF DATA:
I was misled by your comment: you used the word lag of 26 and I incorrectly understood you were talking about the acf, but you were talking about time point 26. A data set can be non-stationary in a number of ways. If the mean shifts, the remedy for this non-stationarity is de-meaning. In your case the non-stationarity is caused by two separate and distinct trends and one significant increase in error variance. Both of these findings are easily supported by the eye.
Your data has non-stationarity, but the remedy for your data's non-stationarity in the mean is not differencing but de-trending, as two trends are found (1-29 and 30-65) via Intervention Detection. Furthermore, your error variance is non-stationary, increasing significantly at period 28, found via Tsay's test for non-constant error variance. See this reference for both procedures: http://www.unc.edu/~jbhill/tsay.pdf . After adjusting for the two trends, the error variance change and a few pulses, a simple AR(1) model was found to be adequate. Here is the plot of Actual/Fit/Forecast. The equation is here, with estimation results here. The variance change test is here and the plot of the model's residuals is here. I used AUTOBOX, a piece of software that I have helped develop, to automatically separate signal from noise. Your data set is the "poster boy" for why simple ARIMA modelling is not widely used: simple methods don't work on complex problems. Note well that the change in error variance is not linkable to the level of the observed series, thus power transformations such as logs are not relevant even though published papers present models using that structure. See Log or square-root transformation for ARIMA for a discussion on when to take power transformations. | Multiple ARIMA models fit data well. How to determine order? Correct approach? | 1)Can you still describe the ACF of the time series as cutting of despite the spikes around lag 26?
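A rough sketch of the de-trend-plus-AR(1) idea in base R (my illustration, not the AUTOBOX output; it assumes the series is stored in y and takes the break after period 29 from the discussion above):
n  <- length(y)
t1 <- pmin(seq_len(n), 29)          # trend over periods 1-29, then flat
t2 <- pmax(seq_len(n) - 29, 0)      # trend over periods 30 onwards
fit <- arima(y, order = c(1, 0, 0), xreg = cbind(t1, t2))   # AR(1) errors around two trend segments
fit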
26 and 27 suggest to me that the data is weekly some sort of annual cycle pf order 26 or 52
Are thes | Multiple ARIMA models fit data well. How to determine order? Correct approach?
1)Can you still describe the ACF of the time series as cutting of despite the spikes around lag 26?
26 and 27 suggest to me that the data is weekly, with some sort of annual cycle of order 26 or 52.
Are these outliers an indicator that a mixed ARMA model might be more appropriate?
If there are outliers in the observed series then the ARIMA model becomes a Transfer Function Model with dummy inputs.
Outliers in the acf/pacf are usually non-interpretable. Rather, use the acf/pacf of a tentative model suggested by the dominant acf/pacf and then ITERATE to a more complex model.
Which Information Criterion should I choose? AIC? AICC?
The residuals of the three models with the highest AIC do all show white noise behavior, but the difference in the AIC is only very small. Should I use the one with the fewest parameters, i.e. an ARIMA(0,1,1)?
None as it is based upon a trial set of assumed models.
Is my argumentation in general plausible?
Vague question ... even vaguer response.
Are their further possibilities to determine which model might be better or should I for example, the two with the highest AIC and perform backtests to test the plausibility of forecasts?
Simply ITERATE (slowly!) to more/less complicated models incorporating both auto-regressive structure and deterministic structure. See http://www.autobox.com/cms/index.php/blog/entry/build-or-make-your-own-arima-forecasting-mode for a logic flow diagram.
EDIT AFTER RECEIPT OF DATA:
I was misled by your comment , you used the word lag of 26 and I incorrectly understood you were talking about the acf but you were talking about time point 26. A data set can be non-stationary in a number of ways. If the mean shifts the remedy for this non-stationarity is de-meaning . In your case the non-stationarity is caused by two separate and distinct trends and one significant increase in error variance. Both of these findings are easily supported by the eye.
Your data has non-stationarity but the remedy for your data's non-stationarity in the mean is not differencing but de-trending as two trends are found (1-29 and 30-65 ) found via Intervention Detection. Furthermore your error variance is non-stationary significantly increasing at period 28 found via Tsay's test for non-constant error variance, See this reference for both procedures http://www.unc.edu/~jbhill/tsay.pdf . After adjusting for the two trends and error variance change and a few pulses, a simple AR(1) model was found to be adequate. Here is the plot of Actual/Fit/Forecast . The equation is here with estimation results here
. The variance change test is here and the plot of the model's residuals is here . I used AUTOBOX a piece of software that I have helped develop to automatically separate signal from noise. Your data set is the "poster boy" for why simple ARIMA modelling is not widely used because simple methods don't work on complex problems. Note well that the change in error variance is not linkable to the level of the observes series thus power transformations such as logs are not relevant even though published papers present models using that structure. See Log or square-root transformation for ARIMA for a discussion on when to take power transformations. | Multiple ARIMA models fit data well. How to determine order? Correct approach?
1)Can you still describe the ACF of the time series as cutting of despite the spikes around lag 26?
26 and 27 suggest to me that the data is weekly some sort of annual cycle pf order 26 or 52
Are thes |
32,669 | Interpretation of log(1 + x) transformed predictor [duplicate] | It depends. According to Wooldridge (2012) the percentage change interpretations are often closely preserved, except for changes
beginning at $y = 0$ (where the percentage change is not defined).
Strictly speaking, using $\log(1+y)$ and then interpreting the estimates as if the variable were $\log(y)$ is acceptable only if the data on $y$ contain relatively few zeros. | Interpretation of log(1 + x) transformed predictor [duplicate] | It depends. According to Wooldridge (2012) the percentage change interpretations are often closely preserved, except for changes
beginning at $y = 0$ (where the percentage change is not defined).
Stri | Interpretation of log(1 + x) transformed predictor [duplicate]
It depends. According to Wooldridge (2012) the percentage change interpretations are often closely preserved, except for changes
beginning at $y = 0$ (where the percentage change is not defined).
Strictly speaking, using $\log(1+y)$ and then interpreting the estimates as if the variable were $\log(y)$ is acceptable only if the data on $y$ contain relatively few zeros. | Interpretation of log(1 + x) transformed predictor [duplicate]
It depends. According to Wooldridge (2012) the percentage change interpretations are often closely preserved, except for changes
beginning at $y = 0$ (where the percentage change is not defined).
Stri |
32,670 | Range of lambda in elastic net regression | I think you should use a range of $0$ to
$$\lambda_\text{max}^\prime = \frac{1}{1-\alpha}\lambda_\text{max}$$
My reasoning comes from extending the lasso case, and a full derivation is below. The qualifier is that it doesn't capture the $\text{dof}$ constraint contributed by the $\ell_2$ regularization. If I work out how to fix that (and decide whether it actually needs fixing), I'll come back and edit it in.
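In code, under the parameterization used here (where $(1-\alpha)\lambda$ multiplies the $\ell_1$ term), the grid endpoint can be computed directly. A small sketch, assuming a centred design matrix X and response y; note that other implementations (e.g. glmnet) scale by the sample size and put $\alpha$ on the $\ell_1$ term, so their internal value differs:
lambda_max  <- max(abs(crossprod(X, y)))              # max_i |(X^T y)_i|
alpha       <- 0.5                                    # example mixing value
lambda_grid <- seq(0, lambda_max/(1 - alpha), length.out = 100)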
Define the objective
$$f(b) = \frac{1}{2} \|y - Xb\|^2 + \frac{1}{2} \gamma \|b\|^2 + \delta \|b\|_1$$
This is the objective you described, but with some parameters substituted to improve clarity.
Conventionally, $b=0$ can only be a solution to the optimization problem $\min f(b)$ if the gradient at $b = 0$ is zero. The term $\|b\|_1$ is non-smooth though, so the condition is actually that $0$ lies in the subgradient at $b = 0$.
The subgradient of $f$ is
$$\partial f = -X^T(y - Xb) + \gamma b + \delta \partial \|b\|_1$$
where $\partial$ denotes the subgradient with respect to $b$. At $b=0$, this becomes
$$\partial f|_{b=0} = -X^Ty + \delta[-1, 1]^d$$
where $d$ is the dimension of $b$, and $[-1,1]^d$ is a $d$-dimensional cube. So for the optimization problem to have a solution of $b = 0$, it must be that
$$(X^Ty)_i \in \delta [-1, 1]$$
for each component $i$. This is equivalent to
$$\delta > \max_i \left|\sum_j y_j X_{ij} \right|$$
which is the definition you gave for $\lambda_\text{max}$. If $\delta = (1-\alpha)\lambda$ is now swapped in, the formula from the top of the post falls out. | Range of lambda in elastic net regression | I think you should use a range of $0$ to
$$\lambda_\text{max}^\prime = \frac{1}{1-\alpha}\lambda_\text{max}$$
My reasoning comes from extending the lasso case, and a full derivation is below. The qual | Range of lambda in elastic net regression
I think you should use a range of $0$ to
$$\lambda_\text{max}^\prime = \frac{1}{1-\alpha}\lambda_\text{max}$$
My reasoning comes from extending the lasso case, and a full derivation is below. The qualifier is it doesn't capture the $\text{dof}$ constraint contributed by the $\ell_2$ regularization. If I work out how to fix that (and decide whether it actually needs fixing), I'll come back and edit it in.
Define the objective
$$f(b) = \frac{1}{2} \|y - Xb\|^2 + \frac{1}{2} \gamma \|b\|^2 + \delta \|b\|_1$$
This is the objective you described, but with some parameters substituted to improve clarity.
Conventionally, $b=0$ can only be a solution to the optimization problem $\min f(b)$ if the gradient at $b = 0$ is zero. The term $\|b\|_1$ is non-smooth though, so the condition is actually that $0$ lies in the subgradient at $b = 0$.
The subgradient of $f$ is
$$\partial f = -X^T(y - Xb) + \gamma b + \delta \partial \|b\|_1$$
where $\partial$ denotes the subgradient with respect to $b$. At $b=0$, this becomes
$$\partial f|_{b=0} = -X^Ty + \delta[-1, 1]^d$$
where $d$ is the dimension of $b$, and a $[-1,1]^d$ is a $d$-dimensional cube. So for the optimization problem to have a solution of $b = 0$, it must be that
$$(X^Ty)_i \in \delta [-1, 1]$$
for each component $i$. This is equivalent to
$$\delta > \max_i \left|\sum_j y_j X_{ij} \right|$$
which is the definition you gave for $\lambda_\text{max}$. If $\delta = (1-\alpha)\lambda$ is now swapped in, the formula from the top of the post falls out. | Range of lambda in elastic net regression
I think you should use a range of $0$ to
$$\lambda_\text{max}^\prime = \frac{1}{1-\alpha}\lambda_\text{max}$$
My reasoning comes from extending the lasso case, and a full derivation is below. The qual |
32,671 | Is OLS Asymptotically Efficient Under Heteroscedasticity | The article never assumed homoskedasticity in the definition. To put it in the context of the article, homoskedasticity would be saying
$$
E\{(\hat x-x)(\hat x-x)^T\}=\sigma I
$$
Where $I$ is the $n\times n$ identity matrix and $\sigma$ is a scalar positive number. Heteroscedasticity allows for
$$
E\{(\hat x-x)(\hat x-x)^T\}=D
$$
for any diagonal positive definite $D$. The article defines the covariance matrix in the most general way possible, as the centered second moment of some implicit multivariate distribution. We must know the multivariate distribution of $e$ to obtain an asymptotically efficient and consistent estimate of $\hat x$. This will come from a likelihood function (which is a mandatory component of the posterior). For example, assume $e \sim N(0,\Sigma)$ (i.e. $E\{(\hat x-x)(\hat x-x)^T\}=\Sigma$). Then the implied likelihood function is
$$
\log[L]=\log[\phi(\hat x -x, \Sigma)]
$$
Where $\phi$ is the multivariate normal pdf.
The fisher information matrix may be written as
$$
I(x)=E\bigg[\bigg(\frac{\partial}{\partial x}\log[L]\bigg)^2 \,\bigg|\,x \bigg]
$$
see en.wikipedia.org/wiki/Fisher_information for more. It is from here that we can derive
$$
\sqrt{n}(\hat x -x) \rightarrow^d N(0, I^{-1}(x))
$$
The above uses a quadratic loss function but does not assume homoscedasticity.
In the context of OLS, where we regress $y$ on $x$ we assume
$$
E\{y|x\}=x'\beta
$$
The likelihood implied is
$$
\log[L]=\log[\phi(y-x'\beta, \sigma I)]
$$
Which may be conveniently rewritten as
$$
\log[L]=\sum_{i=1}^n\log[\varphi(y-x'\beta, \sigma)]
$$
with $\varphi$ the univariate normal pdf. The Fisher information is then
$$
I(\beta)=[\sigma (xx')^{-1}]^{-1}
$$
If homoskedasticity is not met, then the Fisher information as stated is misspecified (but the conditional expectation function is still correct), so the estimates of $\beta$ will be consistent but inefficient. We could rewrite the likelihood to account for heteroskedasticity so that the regression is efficient, i.e. we can write
$$
\log[L]=\log[\phi(y-x'\beta, D)]
$$
This is equivalent to certain forms of Generalized Least Squares, such as Weighted Least Squares. However, this will change the Fisher information matrix. In practice we often don't know the form of heteroscedasticity, so we sometimes prefer to accept the inefficiency rather than risk biasing the regression by misspecifying the weighting scheme. In such cases the asymptotic covariance of $\beta$ is not $\frac{1}{n}I^{-1}(\beta)$ as specified above. | Is OLS Asymptotically Efficient Under Heteroscedasticity | The article never assumed homoskadasticity in the definition. To put it in the context of the article, homoskedasticity would be saying
$$
E\{(\hat x-x)(\hat x-x)^T\}=\sigma I
$$
Where $I$ is the $ | Is OLS Asymptotically Efficient Under Heteroscedasticity
The article never assumed homoskedasticity in the definition. To put it in the context of the article, homoskedasticity would be saying
$$
E\{(\hat x-x)(\hat x-x)^T\}=\sigma I
$$
Where $I$ is the $n\times n$ identity matrix and $\sigma$ is a scalar positive number. Heteroscedasticity allows for
$$
E\{(\hat x-x)(\hat x-x)^T\}=D
$$
for any diagonal positive definite $D$. The article defines the covariance matrix in the most general way possible, as the centered second moment of some implicit multivariate distribution. We must know the multivariate distribution of $e$ to obtain an asymptotically efficient and consistent estimate of $\hat x$. This will come from a likelihood function (which is a mandatory component of the posterior). For example, assume $e \sim N(0,\Sigma)$ (i.e. $E\{(\hat x-x)(\hat x-x)^T\}=\Sigma$). Then the implied likelihood function is
$$
\log[L]=\log[\phi(\hat x -x, \Sigma)]
$$
Where $\phi$ is the multivariate normal pdf.
The fisher information matrix may be written as
$$
I(x)=E\bigg[\bigg(\frac{\partial}{\partial x}\log[L]\bigg)^2 \,\bigg|\,x \bigg]
$$
see en.wikipedia.org/wiki/Fisher_information for more. It is from here that we can derive
$$
\sqrt{n}(\hat x -x) \rightarrow^d N(0, I^{-1}(x))
$$
The above uses a quadratic loss function but does not assume homoscedasticity.
In the context of OLS, where we regress $y$ on $x$ we assume
$$
E\{y|x\}=x'\beta
$$
The likelihood implied is
$$
\log[L]=\log[\phi(y-x'\beta, \sigma I)]
$$
Which may be conveniently rewritten as
$$
\log[L]=\sum_{i=1}^n\log[\varphi(y-x'\beta, \sigma)]
$$
$\varphi$ the univariate normal pdf. The fisher information is then
$$
I(\beta)=[\sigma (xx')^{-1}]^{-1}
$$
If homoskedasticity is not met, then the Fisher information as stated is misspecified (but the conditional expectation function is still correct), so the estimates of $\beta$ will be consistent but inefficient. We could rewrite the likelihood to account for heteroskedasticity so that the regression is efficient, i.e. we can write
$$
\log[L]=\log[\phi(y-x'\beta, D)]
$$
This is equivalent to certain forms of Generalized Least Squares, such as Weighted least squares. However, this will change the Fisher information matrix. In practice we often don't know the form of heteroscedasticity so we sometimes prefer accept the inefficiency rather than chance biasing the regression by miss specifying weighting schemes. In such cases the asymptotic covariance of $\beta$ is not $\frac{1}{n}I^{-1}(\beta)$ as specified above. | Is OLS Asymptotically Efficient Under Heteroscedasticity
The article never assumed homoskadasticity in the definition. To put it in the context of the article, homoskedasticity would be saying
$$
E\{(\hat x-x)(\hat x-x)^T\}=\sigma I
$$
Where $I$ is the $ |
32,672 | Is OLS Asymptotically Efficient Under Heteroscedasticity | No, OLS is not efficient under heteroscedasticity. Efficiency of an estimator is obtained if the estimator has the least variance among other possible estimators. Statements about efficiency in OLS are made regardless of the limiting distribution of an estimator. | Is OLS Asymptotically Efficient Under Heteroscedasticity | No, OLS is not efficient under heteroscedasticity. Efficiency of an estimator is obtained if the estimator has the least variance among other possible estimators. Statements about efficiency in OLS ar | Is OLS Asymptotically Efficient Under Heteroscedasticity
No, OLS is not efficient under heteroscedasticity. Efficiency of an estimator is obtained if the estimator has the least variance among other possible estimators. Statements about efficiency in OLS are made regardless of the limiting distribution of an estimator. | Is OLS Asymptotically Efficient Under Heteroscedasticity
No, OLS is not efficient under heteroscedasticity. Efficiency of an estimator is obtained if the estimator has the least variance among other possible estimators. Statements about efficiency in OLS ar |
32,673 | How to correctly implement iteratively reweighted least squares algorithm for multiple logistic regression? | In an expression like
$$
\beta^{new}\leftarrow \text{argmin}_{b}(\textbf{z}-\textbf{X}b)^T\textbf{W}(\textbf{z}-\textbf{X}b)
$$
the point is that the output, $\beta^{new}$, is the result of considering all possible $b\in \mathbb{R}^p$ or whatever other space you are optimizing over. That's why there's no superscript: in the optimization problem $\beta$ is a dummy variable, just like with an integral (and I'm deliberately writing $b$ not $\beta$ to reflect $b$ being a dummy variable, not the target parameter).
The overall procedure involves getting a $\beta^{(t)}$, computing the "response" for the WLS, and then solving the WLS problem for $\beta^{(t+1)}$; as you know, we can use derivatives to get a nice closed-form solution for the optimal $\hat \beta$ for this problem. Thus $\beta^{old}$, which is fixed, appears in the vector $\textbf{z}$ in the WLS computation and then leads to $\beta^{new}$. That's the "iteration" part, that we use our current solution to create a new response vector; the WLS part then is solving for the new $\hat \beta$ vector. We keep doing this until there's no "significant" change.
Remember that the WLS procedure doesn't know that it is being used iteratively; as far as it is concerned, it is presented with an $X$, $y$, and $W$ and then outputs
$$
\hat{\beta} = (X^T W X)^{-1} X^T W y
$$
like it would do in any other instance. We are being clever with our choice of $y$ and $W$ and iterating.
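To make the loop explicit, here is a small self-contained sketch (my addition) of the procedure for logistic regression; it should land on essentially the same coefficients as glm:
# IRLS for logistic regression: simulated X (with intercept column) and 0/1 response y
set.seed(1)
n <- 500
X <- cbind(1, rnorm(n), rnorm(n))
y <- rbinom(n, 1, plogis(X %*% c(-1, 2, 0.5)))
beta <- rep(0, ncol(X))                       # beta^old
for (it in 1:25) {
  eta <- as.vector(X %*% beta)
  p   <- plogis(eta)
  W   <- diag(p*(1 - p))
  z   <- eta + (y - p)/(p*(1 - p))            # working response built from beta^old
  beta_new <- solve(t(X) %*% W %*% X, t(X) %*% W %*% z)   # the WLS solution
  if (max(abs(beta_new - beta)) < 1e-8) break
  beta <- as.vector(beta_new)
}
cbind(irls = beta, glm = coef(glm(y ~ X[, -1], family = binomial)))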
Update:
We can derive the solution to the WLS problem without using any component-wise derivatives. Note that if $Y \sim \mathcal N(X\beta, I)$ then $W^{1/2}Y \sim \mathcal N(W^{1/2}X\beta, W)$ from which we have that
$$
\frac{\text d}{\text d\beta}\|W^{1/2}Y - W^{1/2}X\beta\|^2 = -2X^TWY + 2X^TWX\beta.
$$
Setting the derivative equal to 0 and solving we obtain
$$
\hat{\beta} = (X^TWX)^{-1} X^TWY.
$$
Thus for any inputs $W$, $X$, and $Y$ (provided W is positive definite and $X$ is full column rank) we get our optimal $\hat{\beta}$. It doesn't matter what these inputs are. So what we do is we use our $\beta^{old}$ to create our $Y$ vector and then we plug that in to this formula which outputs the optimal $\hat \beta$ for the given inputs. The whole point of the WLS procedure is to solve for $\hat \beta$. It in and of itself doesn't require plugging in a $\hat \beta$. | How to correctly implement iteratively reweighted least squares algorithm for multiple logistic regr | In an expression like
$$
\beta^{new}\leftarrow \text{argmin}_{b}(\textbf{z}-\textbf{X}b)^T\textbf{W}(\textbf{z}-\textbf{X}b)
$$
the point is that the output, $\beta^{new}$, is the result of considerin | How to correctly implement iteratively reweighted least squares algorithm for multiple logistic regression?
In an expression like
$$
\beta^{new}\leftarrow \text{argmin}_{b}(\textbf{z}-\textbf{X}b)^T\textbf{W}(\textbf{z}-\textbf{X}b)
$$
the point is that the output, $\beta^{new}$, is the result of considering all possible $b\in \mathbb{R}^p$ or whatever other space you are optimizing over. That's why there's no superscript: in the optimization problem $\beta$ is a dummy variable, just like with an integral (and I'm deliberately writing $b$ not $\beta$ to reflect $b$ being a dummy variable, not the target parameter).
The overall procedure involves getting a $\beta^{(t)}$, computing the "response" for the WLS, and then solving the WLS problem for $\beta^{(t+1)}$; as you know, we can use derivatives to get a nice closed-form solution for the optimal $\hat \beta$ for this problem. Thus $\beta^{old}$, which is fixed, appears in the vector $\textbf{z}$ in the WLS computation and then leads to $\beta^{new}$. That's the "iteration" part, that we use our current solution to create a new response vector; the WLS part then is solving for the new $\hat \beta$ vector. We keep doing this until there's no "significant" change.
Remember that the WLS procedure doesn't know that it is being used iteratively; as far as it is concerned, it is presented with an $X$, $y$, and $W$ and then outputs
$$
\hat{\beta} = (X^T W X)^{-1} X^T W y
$$
like it would do in any other instance. We are being clever with our choice of $y$ and $W$ and iterating.
Update:
We can derive the solution to the WLS problem without using any component-wise derivatives. Note that if $Y \sim \mathcal N(X\beta, I)$ then $W^{1/2}Y \sim \mathcal N(W^{1/2}X\beta, W)$ from which we have that
$$
\frac{\text d}{\text d\beta}\|W^{1/2}Y - W^{1/2}X\beta\|^2 = -2X^TWY + 2X^TWX\beta.
$$
Setting the derivative equal to 0 and solving we obtain
$$
\hat{\beta} = (X^TWX)^{-1} X^TWY.
$$
Thus for any inputs $W$, $X$, and $Y$ (provided W is positive definite and $X$ is full column rank) we get our optimal $\hat{\beta}$. It doesn't matter what these inputs are. So what we do is we use our $\beta^{old}$ to create our $Y$ vector and then we plug that in to this formula which outputs the optimal $\hat \beta$ for the given inputs. The whole point of the WLS procedure is to solve for $\hat \beta$. It in and of itself doesn't require plugging in a $\hat \beta$. | How to correctly implement iteratively reweighted least squares algorithm for multiple logistic regr
In an expression like
$$
\beta^{new}\leftarrow \text{argmin}_{b}(\textbf{z}-\textbf{X}b)^T\textbf{W}(\textbf{z}-\textbf{X}b)
$$
the point is that the output, $\beta^{new}$, is the result of considerin |
32,674 | On the utility of the intercept-slope correlation in multilevel models | I emailed several scholars (almost 30 persons) several weeks ago. A few of them sent a reply (always collective emails). Eugene Demidenko was the first to answer:
cov/sqrt(var1*var2) is always within [-1,1] regardless of the interpretation: it may be estimates of intercept and slope, two slopes, etc. The fact that -1<=cov/sqrt(var1*var2)<=1 follows from the Cauchy inequality and it is always true. Thus I dismiss the Snijders & Bosker statement. Maybe some other piece of information is missing?
This was followed by an email from Thomas Snijders :
The information that is missing is what was actually written about this on page 122, 123, 124, 129 of Snijders & Bosker (2nd edition 2012). This is not about two competing claims of which no more than one can be true, it is about two different interpretations.
On p. 123 a quadratic variance function is introduced,
\sigma_0^2 + 2 \sigma_{01} * x + \sigma_1^2 * x^2
and the following remark is made:
"This formula can be used without the interpretation that \sigma_0^2 and \sigma_1^2 are variances and \sigma_{01} a covariance;
these parameters might be any numbers. The formula only implies that the residual variance is a quadratic function of x.
Let me quote a full paragraph of p. 129, about a quadratic variance function at level two; note that ONE MIGHT INTERPRET that \tau_0^2 and \tau_1^2 are the level-two variances of the random intercept and random slope, and \tau_{01} is their covariance, but this is explicitly put behind the horizon:
"The parameters \tau_0^2, \tau_1^2, and \tau_{01} are, as in the preceding section, not to be interpreted themselves as variances and a corresponding covariance. The interpretation is by means of the variance function (8.7) [note t.s.: in the book this is mistakenly reported as 8.8].
Therefore it is not required that \tau_{01}^2 <= \tau_0^2 * \tau_1^2.
To put it another way, 'correlations' defined formally by \tau_{01}/(\tau_0 * \tau_1) may be larger than 1 or smaller than -1, even infinite, because the idea of a correlation does not make sense here.
An example of this is provided by the linear variance function for which \tau_1^2 = 0 and only the parameters \tau_0^2 and \tau_{01} are used."
The variance function is a quadratic function of x (the variable "with the random slope"), and the variance of the outcome is this plus the level-1 variance. As long as this is positive for all x, the modelled variance is positive. (An extra requirement is that the corresponding covariance matrix is positive definite.)
Some further background of this is the existence of differences in parameter estimation algorithms in software. In some multilevel (random effects) software, the requirement is made that the covariance matrices of the random effects are positive semi-definite on all levels. In other software, the requirement is made only that the resulting estimated covariance matrix for the observed data is positive semi-definite. This implies that the idea of random coefficients of latent variables is relinquished, and the model specifies a certain covariance structure for the observed data; no more, no less; in that case the cited interpretation of Joop Hox does not apply. Note that Harvey Goldstein already long ago used linear variance functions at level one, represented by a zero slope variance and nonzero slope-intercept correlation at level one; this was and is called "complex variation"; see, e.g.,
http://www.bristol.ac.uk/media-library/sites/cmm/migrated/documents/modelling-complex-variation.pdf
And then, Joop Hox replied :
In the software MLwiN it is actually possible to estimate a covariance term and at the same time constrain one of the variances to zero, which would make the "correlation" infinite. And yes, some software will allow estimates such as negative variances (SEM software usually allows this). So my statements were not completely accurate. I refered to "normal" unstructured random structures. Let me add that if you rescale the variable with the random slope to have a different zero-point, the variances and covariances generally change. So the correlation is only interpretable if the predictor variable has a fixed zero-point, i.e. is measured on a ratio scale. This applies to growth curve models, where the correlation between initial status and rate of growth is sometimes interpreted. In that case the value zero should be the 'real' time point where the process starts.
And he sent another mail :
Anyway, I think Tom's explanation below fits the style of the Snijders/Bosker collaboration better than my more informal style. I would add to page 90 a footnote stating something like "Note that the parameter values in the random part are estimates. Interpreting the standardized covariances as ordinary correlations assumes that there are no constraints on the variances and that the software does not allow negative estimates. If the random part is unstructured the interpretation as ordinary (co)variances is generally tenable.".
Note that I wrote about the correlation interpretation in the longitudinal chapter. In growth curve modeling it is very tempting to interpret this correlation as a substantive result, and that is dangerous because the value depends on the "metric of time". If you are interested in that I recommend to go to Lesa Hoffman's website (http://www.lesahoffman.com/).
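Hox's point about the zero-point is easy to see in practice. Here is a small illustration I am adding (it uses lme4's sleepstudy data and is not part of the correspondence): shifting the time variable changes the estimated intercept-slope correlation.
library(lme4)
m0 <- lmer(Reaction ~ Days + (Days | Subject), sleepstudy)
sleepstudy$Days5 <- sleepstudy$Days - 5            # move the "time zero" to day 5
m5 <- lmer(Reaction ~ Days5 + (Days5 | Subject), sleepstudy)
VarCorr(m0)   # intercept-slope correlation with zero at Days = 0
VarCorr(m5)   # a different correlation with zero at Days = 5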
So I think in my situation, where I've specified an unstructured covariance for the random effects, I should interpret the intercept-slope correlation as an ordinary correlation. | On the utility of the intercept-slope correlation in multilevel models | I have emailed several scholars (almost 30 persons) several weeks ago. Few of them sent their mail (always collective emails). Eugene Demidenko was the first to answer :
cov/sqrt(var1*var2) is always | On the utility of the intercept-slope correlation in multilevel models
I have emailed several scholars (almost 30 persons) several weeks ago. Few of them sent their mail (always collective emails). Eugene Demidenko was the first to answer :
cov/sqrt(var1*var2) is always within [-1,1] regardless of the interpretation: it may be estimates of intercept and slope, two slopes, etc. The fact that -1<=cov/sqrt(var1*var2)<=1 follows from the Cauchy inequality and it is always true. Thus I dismiss the Snijders & Bosker statement. Maybe some other piece of information is missing?
This was followed by an email from Thomas Snijders :
The information that is missing is what was actually written about this on page 122, 123, 124, 129 of Snijders & Bosker (2nd edition 2012). This is not about two competing claims of which no more than one can be true, it is about two different interpretations.
On p. 123 a quadratic variance function is introduced,
\sigma_0^2 + 2 \sigma_{01} * x + \sigma_1^2 * x^2
and the following remark is made:
"This formula can be used without the interpretation that \sigma_0^2 and \sigma_1^2 are variances and \sigma_{01} a covariance;
these parameters might be any numbers. The formula only implies that the residual variance is a quadratic function of x.
Let me quote a full paragraph of p. 129, about a quadratic variance function at level two; note that ONE MIGHT INTERPRET that \tau_0^2 and \tau_1^2 are the level-two variances of the random intercept and random slope, and \tau_{01} is their covariance, but this is explicitly put behind the horizon:
"The parameters \tau_0^2, \tau_1^2, and \tau_{01} are, as in the preceding section, not to be interpreted themselves as variances and a corresponding covariance. The interpretation is by means of the variance function (8.7) [note t.s.: in the book this is mistakenly reported as 8.8].
Therefore it is not required that \tau_{01}^2 <= \tau_0^2 * \tau_1^2.
To put it another way, 'correlations' defined formally by \tau_{01}/(\tau_0 * \tau_1) may be larger than 1 or smaller than -1, even infinite, because the idea of a correlation does not make sense here.
An example of this is provided by the linear variance function for which \tau_1^2 = 0 and only the parameters \tau_0^2 and \tau_{01} are used."
The variance function is a quadratic function of x (the variable "with the random slope"), and the variance of the outcome is this plus the level-1 variance. As long as this is positive for all x, the modelled variance is positive. (An extra requirement is that the corresponding covariance matrix is positive definite.)
Some further background of this is the existence of differences in parameter estimation algorithms in software. In some multilevel (random effects) software, the requirement is made that the covariance matrices of the random effects are positive semi-definite on all levels. In other software, the requirement is made only that the resulting estimated covariance matrix for the observed data is positive semi-definite. This implies that the idea of random coefficients of latent variables is relinquished, and the model specifies a certain covariance structure for the observed data; no more, no less; in that case the cited interpretation of Joop Hox does not apply. Note that Harvey Goldstein already long ago used linear variance functions at level one, represented by a zero slope variance and nonzero slope-intercept correlation at level one; this was and is called "complex variation"; see, e.g.,
http://www.bristol.ac.uk/media-library/sites/cmm/migrated/documents/modelling-complex-variation.pdf
And then, Joop Hox replied :
In the software MLwiN it is actually possible to estimate a covariance term and at the same time constrain one of the variances to zero, which would make the "correlation" infinite. And yes, some software will allow estimates such as negative variances (SEM software usually allows this). So my statements were not completely accurate. I refered to "normal" unstructured random structures. Let me add that if you rescale the variable with the random slope to have a different zero-point, the variances and covariances generally change. So the correlation is only interpretable if the predictor variable has a fixed zero-point, i.e. is measured on a ratio scale. This applies to growth curve models, where the correlation between initial status and rate of growth is sometimes interpreted. In that case the value zero should be the 'real' time point where the process starts.
And he sent another mail :
Anyway, I think Tom's explanation below fits the style of the Snijders/Bosker collaboration better than my more informal style. I would add to page 90 a footnote stating something like "Note that the parameter values in the random part are estimates. Interpreting the standardized covariances as ordinary correlations assumes that there are no constraints on the variances and that the software does not allow negative estimates. If the random part is unstructured the interpretation as ordinary (co)variances is generally tenable.".
Note that I wrote about the correlation interpretation in the longitudinal chapter. In growth curve modeling it is very tempting to interpret this correlation as a substantive result, and that is dangerous because the value depends on the "metric of time". If you are interested in that I recommend to go to Lesa Hoffman's website (http://www.lesahoffman.com/).
So I think in my situation, where I've specified an unstructured covariance for the random effects, I should interpret the intercept-slope correlation as an ordinary correlation. | On the utility of the intercept-slope correlation in multilevel models
I have emailed several scholars (almost 30 persons) several weeks ago. Few of them sent their mail (always collective emails). Eugene Demidenko was the first to answer :
cov/sqrt(var1*var2) is always |
32,675 | On the utility of the intercept-slope correlation in multilevel models | I can only applaud your effort in going to check with the folks in the field. I would like to just a small comment regarding the utility of the correlation between the intercept and the slope. Skrondal and Rabe-Hesketh (2004) provide a simple, silly example of how one can manipulate that correlation by the shift/centering of the variable that enters the model with a random slope. See p. 54 -- search "Figure 3.1" in Amazon preview. It is worth at least a couple dozen words. | On the utility of the intercept-slope correlation in multilevel models | I can only applaud your effort in going to check with the folks in the field. I would like to just a small comment regarding the utility of the correlation between the intercept and the slope. Skronda | On the utility of the intercept-slope correlation in multilevel models
I can only applaud your effort in going to check with the folks in the field. I would like to just a small comment regarding the utility of the correlation between the intercept and the slope. Skrondal and Rabe-Hesketh (2004) provide a simple, silly example of how one can manipulate that correlation by the shift/centering of the variable that enters the model with a random slope. See p. 54 -- search "Figure 3.1" in Amazon preview. It is worth at least a couple dozen words. | On the utility of the intercept-slope correlation in multilevel models
I can only applaud your effort in going to check with the folks in the field. I would like to just a small comment regarding the utility of the correlation between the intercept and the slope. Skronda |
32,676 | Proportion data - beta distribution v. GLM with binomial distribution and logit link | With count data of that form, I'd actually fit a multinomial model (at least to start with*), because several numerators are present in the denominator - each '+1' count could have gone into any of $k$ cells ('sets').
(e.g. see here)
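As a minimal sketch of what such a multinomial fit could look like (the data frame d and the column names are placeholders, not from the original question; the counts for the k sets go in as a matrix response):
library(nnet)
fit <- multinom(cbind(set1, set2, set3) ~ month + location, data = d)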
You'll need the denominator you divided by; the model is still for the proportion, but the variability depends on the denominator you used to obtain the proportion.
* a particular concern is that you'll have dependence over both space and time (e.g. adjacent locations and adjacent times will tend to be more related than more distant locations or times - at least if there's unmodelled variation that would be accounted for by such effects)
Once you have fitted a multinomial model, you would want to assess whether you have both the variance and the correlation modelled reasonably well -- you might need mixed models (GLMM) and possibly also to account for potential remaining overdispersion in addition.
You will find a number of discussions of multinomial models here on CV.
Another possibility is to model the counts as Poisson, by allowing for offsets, factors or continuous predictors related to the variation you mentioned as the reason you scaled to proportions. | Proportion data - beta distribution v. GLM with binomial distribution and logit link | With count data of that form, I'd actually fit a multinomial model (at least to start with*), because several numerators are present in the denominator - each '+1' count could have gone into any of $k | Proportion data - beta distribution v. GLM with binomial distribution and logit link
With count data of that form, I'd actually fit a multinomial model (at least to start with*), because several numerators are present in the denominator - each '+1' count could have gone into any of $k$ cells ('sets').
(e.g. see here)
You'll need the denominator you divided by; the model is still for the proportion, but the variability depends on the denominator you used to obtain the proportion.
* a particular concern is that you'll have dependence over both space and time (e.g. adjacent locations and adjacent times will tend to be more related than more distant locations or times - at least if there's unmodelled variation that would be accounted for by such effects)
Once you have fitted a multinomial model, you would want to assess whether you have both the variance and the correlation modelled reasonably well -- you might need mixed models (GLMM) and possibly also to account for potential remaining overdispersion in addition.
You will find a number of discussions of multinomial models here on CV.
Another possibility is to model the counts as Poisson, by allowing for offsets, factors or continuous predictors related to the variation you mentioned as the reason you scaled to proportions. | Proportion data - beta distribution v. GLM with binomial distribution and logit link
With count data of that form, I'd actually fit a multinomial model (at least to start with*), because several numerators are present in the denominator - each '+1' count could have gone into any of $k |
32,677 | Proportion data - beta distribution v. GLM with binomial distribution and logit link | Based on your answer of how the proportion is calculated I believe the beta regression is most appropriate. The logistic regression for count binomial would only make sense if you have counts out of a total that is constant. Since your total changes from month to month you have a continuous proportion. Therefore beta regression is the way to go! | Proportion data - beta distribution v. GLM with binomial distribution and logit link | Based on your answer of how the proportion is calculated I believe the beta regression is most appropriate. The logistic regression for count binomial would only make sense if you have counts out of a | Proportion data - beta distribution v. GLM with binomial distribution and logit link
Based on your answer of how the proportion is calculated I believe the beta regression is most appropriate. The logistic regression for count binomial would only make sense if you have counts out of a total that is constant. Since your total changes from month to month you have a continuous proportion. Therefore beta regression is the way to go! | Proportion data - beta distribution v. GLM with binomial distribution and logit link
Based on your answer of how the proportion is calculated I believe the beta regression is most appropriate. The logistic regression for count binomial would only make sense if you have counts out of a |
32,678 | From Identification to Estimation | This is a very good question. First let's verify if your formula is correct. The information you have given corresponds to the following causal model:
And as you have said we can derive the estimand for $P(Y|do(X))$ using the rules of do-calculus. In R we can easily do that with the package causaleffect. We first load igraph to create an object with the causal diagram you are proposing:
library(igraph)
g <- graph.formula(X-+Y, Y-+X, X-+Z-+Y, W-+X, W-+Z, W-+Y, simplify = FALSE)
g <- set.edge.attribute(graph = g, name = "description", index = 1:2, value = "U")
Where the first two terms X-+Y, Y-+X represent the unobserved confounders of $X$ and $Y$, and the rest of the terms represent the directed edges you mentioned.
Then we ask for our estimand:
library(causaleffect)
cat(causal.effect("Y", "X", G = g, primes = TRUE, simp = T, expr = TRUE))
$$
\sum_{W,Z}\left(\sum_{X^{\prime}}P(Y|W,X^{\prime},Z)P(X^{\prime}|W)\right)P(Z|W,X)P(W)
$$
Which indeed coincides with your formula --- a case of frontdoor with an observed confounder.
Now let's go to the estimation part. If you assume linearity (and normality), things are greatly simplified. Basically what you want to do is to estimate the coefficients of the path $X \rightarrow Z \rightarrow Y$.
Let's simulate some data:
set.seed(1)
n <- 1e3
u <- rnorm(n) # y -> x unobserved confounder
w <- rnorm(n)
x <- w + u + rnorm(n)
z <- 3*x + 5*w + rnorm(n)
y <- 7*z + 11*w + 13*u + rnorm(n)
Notice in our simulation the true causal effect of a change of $X$ on $Y$ is 21. You can estimate this by running two regressions. First $Y \sim Z+W+X$ to get the effect of $Z$ on $Y$ and then $Z \sim X + W$ to get the effect of $X$ on $Z$. Your estimate will be the product of both coefficients:
yz_model <- lm(y ~ z + w + x)
zx_model <- lm(z ~ x + w)
yz <- coef(yz_model)[2]
zx <- coef(zx_model)[2]
effect <- zx*yz
effect
x
21.37626
And for inference you may compute the (asymptotic) standard error of the product:
se_yz <- coef(summary(yz_model))[2, 2]
se_zx <- coef(summary(zx_model))[2, 2]
se <- sqrt(yz^2*se_zx^2 + zx^2*se_yz^2)
Which you may use for tests or confidence intervals:
c(effect - 1.96*se, effect + 1.96*se) # 95% CI
x x
19.66441 23.08811
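As a quick check on that interval, you could also bootstrap the product estimator directly (a simple nonparametric sketch I am adding, reusing the simulated data above):
boot_est <- replicate(1000, {
  i <- sample(n, replace = TRUE)
  coef(lm(y[i] ~ z[i] + w[i] + x[i]))[2] * coef(lm(z[i] ~ x[i] + w[i]))[2]
})
quantile(boot_est, c(0.025, 0.975))   # percentile interval, close to the delta-method CI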
You can also perform (non/semi)-parametric estimation, I will try to update this answer including other procedures later. | From Identification to Estimation | This is a very good question. First let's verify if your formula is correct. The information you have given corresponds to the following causal model:
And as you have said we can derive the estimand | From Identification to Estimation
This is a very good question. First let's verify if your formula is correct. The information you have given corresponds to the following causal model:
And as you have said we can derive the estimand for $P(Y|do(X))$ using the rules of do-calculus. In R we can easily do that with the package causaleffect. We first load igraph to create an object with the causal diagram you are proposing:
library(igraph)
g <- graph.formula(X-+Y, Y-+X, X-+Z-+Y, W-+X, W-+Z, W-+Y, simplify = FALSE)
g <- set.edge.attribute(graph = g, name = "description", index = 1:2, value = "U")
Where the first two terms X-+Y, Y-+X represent the unobserved confounders of $X$ and $Y$, and the rest of the terms represent the directed edges you mentioned.
Then we ask for our estimand:
library(causaleffect)
cat(causal.effect("Y", "X", G = g, primes = TRUE, simp = T, expr = TRUE))
$$
\sum_{W,Z}\left(\sum_{X^{\prime}}P(Y|W,X^{\prime},Z)P(X^{\prime}|W)\right)P(Z|W,X)P(W)
$$
Which indeed coincides with your formula --- a case of frontdoor with an observed confounder.
Now let's go to the estimation part. If you assume linearity (and normality), things are greatly simplified. Basically what you want to do is to estimate the coefficients of the path $X \rightarrow Z \rightarrow Y$.
Let's simulate some data:
set.seed(1)
n <- 1e3
u <- rnorm(n) # y -> x unobserved confounder
w <- rnorm(n)
x <- w + u + rnorm(n)
z <- 3*x + 5*w + rnorm(n)
y <- 7*z + 11*w + 13*u + rnorm(n)
Notice in our simulation the true causal effect of a change of $X$ on $Y$ is 21. You can estimate this by running two regressions. First $Y \sim Z+W+X$ to get the effect of $Z$ on $Y$ and then $Z \sim X + W$ to get the effect of $X$ on $Z$. Your estimate will be the product of both coefficients:
yz_model <- lm(y ~ z + w + x)
zx_model <- lm(z ~ x + w)
yz <- coef(yz_model)[2]
zx <- coef(zx_model)[2]
effect <- zx*yz
effect
x
21.37626
And for inference you may compute the (asymptotic) standard error of the product:
se_yz <- coef(summary(yz_model))[2, 2]
se_zx <- coef(summary(zx_model))[2, 2]
se <- sqrt(yz^2*se_zx^2 + zx^2*se_yz^2)
Which you may use for tests or confidence intervals:
c(effect - 1.96*se, effect + 1.96*se) # 95% CI
x x
19.66441 23.08811
You can also perform (non/semi)-parametric estimation, I will try to update this answer including other procedures later. | From Identification to Estimation
This is a very good question. First let's verify if your formula is correct. The information you have given corresponds to the following causal model:
And as you have said we can derive the estimand |
32,679 | Visually summarising a mess of directed line segments | You could also split into time chunks and make facetted plots. This would help if the flow goes back and forward over itself a few times.
It may be possible to smooth the segments to get some sense of the flow too, but I think that facetting would be a first step. | Visually summarising a mess of directed line segments | You could also split into time chunks and make facetted plots. This would help if the flow goes back and forward over itself a few times.
It may be possible to smooth the segments to get some sense of | Visually summarising a mess of directed line segments
You could also split into time chunks and make facetted plots. This would help if the flow goes back and forward over itself a few times.
It may be possible to smooth the segments to get some sense of the flow too, but I think that facetting would be a first step. | Visually summarising a mess of directed line segments
You could also split into time chunks and make facetted plots. This would help if the flow goes back and forward over itself a few times.
It may be possible to smooth the segments to get some sense of |
32,680 | Visually summarising a mess of directed line segments | One possibility would be to average across days, to get one or a few "average" daily cycles instead of 1,000+ individual days.
Another idea might be a 3D representation like in Tominski, Schumann, Andrienko, and Andrienko (2012).
Reference:
Tominski, C., Schumann, H., Andrienko, G., Andrienko, N. (2012). Stacking-based visualization of trajectory attribute data. Visualization and Computer Graphics, IEEE Transactions on , vol.18, no.12, pp.2565,2574, Dec. 2012. doi: 10.1109/TVCG.2012.265. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6327262&isnumber=6327196 | Visually summarising a mess of directed line segments | One possibility would be to average across days, to get one or a few "average" daily cycles instead of 1,000+ individual days.
Another idea might be a 3D representation like in Tominski, Schumann, And | Visually summarising a mess of directed line segments
One possibility would be to average across days, to get one or a few "average" daily cycles instead of 1,000+ individual days.
Another idea might be a 3D representation like in Tominski, Schumann, Andrienko, and Andrienko (2012).
Reference:
Tominski, C., Schumann, H., Andrienko, G., Andrienko, N. (2012). Stacking-based visualization of trajectory attribute data. Visualization and Computer Graphics, IEEE Transactions on , vol.18, no.12, pp.2565,2574, Dec. 2012. doi: 10.1109/TVCG.2012.265. http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6327262&isnumber=6327196 | Visually summarising a mess of directed line segments
One possibility would be to average across days, to get one or a few "average" daily cycles instead of 1,000+ individual days.
Another idea might be a 3D representation like in Tominski, Schumann, And |
32,681 | How are CP (Cost Complexity) values calculated in RPART (or decision trees in general) | I was searching for the same for many days, and I came to know one thing: the cp value calculation is taken care of by the package.
By default, if you do not specify a "CP" value, then rpart will take it as 0.01.
Cp value is cost of adding node to the tree. | How are CP (Cost Complexity) values calculated in RPART (or decision trees in general) | I was searching for same from many days and I came to know one thing that cp value calculation is taken care by package.
By default if you do not specify "CP" value then rpart will take its as 0.01.
C | How are CP (Cost Complexity) values calculated in RPART (or decision trees in general)
I was searching for same from many days and I came to know one thing that cp value calculation is taken care by package.
By default if you do not specify "CP" value then rpart will take its as 0.01.
Cp value is cost of adding node to the tree. | How are CP (Cost Complexity) values calculated in RPART (or decision trees in general)
I was searching for same from many days and I came to know one thing that cp value calculation is taken care by package.
By default if you do not specify "CP" value then rpart will take its as 0.01.
C |
32,682 | How are CP (Cost Complexity) values calculated in RPART (or decision trees in general) | The complexity parameter (cp) in rpart is the minimum improvement in the model needed at each node. It’s based on the cost complexity of the model defined as
For the given tree, add up the misclassification at every terminal node.
Then multiply the number of splits times a penalty term (lambda) and add it to the total misclassification.
The lambda is determined through cross-validation and not reported in R.
For regression models (see next section) the scaled cp has a very direct interpretation: if any split does not increase the overall $R^2$ of the model by at least cp (where $R^2$ is the usual linear-models definition) then that split is decreed to be, a priori, not worth pursuing. See the longintro document for rpart.
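In practice the cp parameter is set through rpart.control and inspected with printcp; a small sketch using the kyphosis data that ships with rpart (an illustration only, not part of the quoted documentation):
library(rpart)
fit <- rpart(Kyphosis ~ Age + Number + Start, data = kyphosis,
             control = rpart.control(cp = 0.001, xval = 10))  # grow a deliberately large tree
printcp(fit)                     # cp table: splits, relative error, cross-validated error
plotcp(fit)                      # visual aid for picking cp
pruned <- prune(fit, cp = 0.02)  # drop splits whose improvement is below cp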
32,683 | What is a good oob score for random forests with sklearn, three-class classification? [duplicate] | sklearn's RF oob_score_ (note the trailing underscore) seriously isn't very intelligible compared to R's, after reading the sklearn doc and source code.
My advice on how to improve your model is as follows:
sklearn's RF used to use the terrible default of max_features=1 (as in "try every feature on every node"). Then it's no longer doing random column(/feature)-selection like a random-forest. Change this to e.g.max_features=0.33 (like R's mtry) and rerun. Tell us the new scores.
"Most of the features have shown negligible importance". Then you need to do Feature Selection, as per the doc - for classification. See the doc and other articles here on CrossValidated.SE. Do the FS on a different (say 20-30%) holdout set than the rest of the training, using e.g. sklearn.cross_validation.train_test_split() (yes the name is a bit misleading). Now tell us the scores you get after FS?
You said "after removing bad data (about third of the data), the labels were more or less 2% for 0 and 49% for each of -1/+1" ; then you have a severe class imbalance. Also: "confusion matrix shows model only succeeds for class 0, and fails in about 50% of the cases between +1 and -1". This is a symptom of the class imbalance. Either you use stratified sampling, or train a classifier with examples for +1 and -1 class. You can either do a OAA (One-Against-All) or OAO (One-Against-One) classifier. Try three OAA classifiers, one for each class. Finally tell us those scores? | What is a good oob score for random forests with sklearn, three-class classification? [duplicate] | sklearn's RF oob_score_ (note the trailing underscore) seriously isn't very intelligible compared to R's, after reading the sklearn doc and source code.
My advice on how to improve your model is as fo | What is a good oob score for random forests with sklearn, three-class classification? [duplicate]
sklearn's RF oob_score_ (note the trailing underscore) seriously isn't very intelligible compared to R's, after reading the sklearn doc and source code.
My advice on how to improve your model is as follows:
sklearn's RF used to use the terrible default of max_features=1 (as in "try every feature on every node"). Then it's no longer doing random column(/feature)-selection like a random-forest. Change this to e.g.max_features=0.33 (like R's mtry) and rerun. Tell us the new scores.
"Most of the features have shown negligible importance". Then you need to do Feature Selection, as per the doc - for classification. See the doc and other articles here on CrossValidated.SE. Do the FS on a different (say 20-30%) holdout set than the rest of the training, using e.g. sklearn.cross_validation.train_test_split() (yes the name is a bit misleading). Now tell us the scores you get after FS?
You said "after removing bad data (about third of the data), the labels were more or less 2% for 0 and 49% for each of -1/+1" ; then you have a severe class imbalance. Also: "confusion matrix shows model only succeeds for class 0, and fails in about 50% of the cases between +1 and -1". This is a symptom of the class imbalance. Either you use stratified sampling, or train a classifier with examples for +1 and -1 class. You can either do a OAA (One-Against-All) or OAO (One-Against-One) classifier. Try three OAA classifiers, one for each class. Finally tell us those scores? | What is a good oob score for random forests with sklearn, three-class classification? [duplicate]
sklearn's RF oob_score_ (note the trailing underscore) seriously isn't very intelligible compared to R's, after reading the sklearn doc and source code.
My advice on how to improve your model is as fo |
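A rough R analogue of the advice above, using randomForest, where mtry plays the role of sklearn's max_features (X and y are placeholders for your feature matrix and three-class label):
library(randomForest)
set.seed(1)
fit <- randomForest(x = X, y = y, ntree = 500,
                    mtry = max(1, floor(ncol(X) / 3)),  # roughly max_features = 0.33
                    importance = TRUE)
print(fit)        # OOB error rate and OOB confusion matrix
varImpPlot(fit)   # variable importance, a starting point for feature selection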
32,684 | What is a good oob score for random forests with sklearn, three-class classification? [duplicate] | There's no such thing as a good oob_score; it's the difference between valid_score and oob_score that matters.
Think of oob_score as a score for some subset (say, oob_set) of the training set. To learn how it is created, refer to this.
oob_set is taken from your training set, and you already have your validation set (say, valid_set).
Let's assume a scenario where your validation_score is 0.7365 and oob_score is 0.8329.
In this scenario, your model is performing better on oob_set, which is taken directly from your training dataset. This indicates that validation_set is for a different time period (say training_set has records for the month of "January" and validation_set has records for the month of "July"). So, more than a test of the model's performance, oob_score is a test of "how representative is your validation_set".
You should always make sure that you have a good, representative validation_set, because its score is used as an indicator of your model's performance. So your goal should be to have as little difference between oob_score and valid_score as possible.
I generally use oob_score together with validation_score to see how good my validation_set is. I learnt this technique from Jeremy Howard.
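A small R sketch of that comparison (iris is only a stand-in data set):
library(randomForest)
set.seed(42)
idx   <- sample(nrow(iris), 100)
train <- iris[idx, ]
valid <- iris[-idx, ]
fit <- randomForest(Species ~ ., data = train, ntree = 500)
oob_acc   <- 1 - fit$err.rate[fit$ntree, "OOB"]
valid_acc <- mean(predict(fit, valid) == valid$Species)
c(oob = oob_acc, validation = valid_acc)  # a large gap suggests the validation set is not representative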
32,685 | What is a good oob score for random forests with sklearn, three-class classification? [duplicate] | Q: What is a good oob score for random forests with sklearn, three-class classification?
A: Depends. In my view, if learning and testing samples are drawn from the same distribution, then OOB is approximately equivalent to 3-fold cross-validation. So if we repeat the same question but with "3-fold cross-validation", the answer would be the same, which is "generally, the higher the accuracy the merrier, unless you fear to overfit your learning set because someone told you that the true testing samples are of a different distribution".
Can you give me your dataset? I can have a little fun with it and tell you what I manage to do with it for free.
32,686 | What is a good oob score for random forests with sklearn, three-class classification? [duplicate] | a different take on the question: to start with, you have to associate a loss with every misclassification you do. This price-paid/loss/penalty for misclassification would(probably) be different for False Positive(FP) vs False Negatives(FN). Some classifications, say cancer detection, would rather have more FPs than FNs. Some other, say spam filter, would rather allow certain spams(FN) than block mails(FP) from your friend. Building on this logic you can have use F1-score or Accuracy, whatever suits your purpose.( for eg. I could be happy if my spam filter has no FPs and a score of .1 as I have 10% less spams to worry about. On the other hand someone else could be unhappy with even .9 (90% spams filtered). What would be good score then?) | What is a good oob score for random forests with sklearn, three-class classification? [duplicate] | a different take on the question: to start with, you have to associate a loss with every misclassification you do. This price-paid/loss/penalty for misclassification would(probably) be different for F | What is a good oob score for random forests with sklearn, three-class classification? [duplicate]
32,687 | The sum of two symmetric random variables is symmetric | This looks a lot like a homework exercise but nonetheless, here goes.
If $X$ and $Y$ are zero-mean independent random variables, then (assuming that
they are continuous random variables) we have that for any $z$,
$f_{X+Y}(z)$ is given by the convolution of the marginal densities. Thus,
$$\begin{align}
f_{X+Y}(z) &= \int_{-\infty}^\infty f_X(z-y)f_Y(y)\,\mathrm dy \tag{1}\\
&= \int_{-\infty}^\infty f_X(y-z)f_Y(-y)\,\mathrm dy,
&\text{symmetry of the densities}\\
&= \int_{-\infty}^\infty f_X(-t-z)f_Y(t)\,\mathrm dt,
&\text{substitution: } t = -y\\
&= \int_{-\infty}^\infty f_X((-z)-t)f_Y(t)\,\mathrm dt,\\
&= \int_{-\infty}^\infty f_X((-z)-y)f_Y(y)\,\mathrm dy,
&\text{substitution: } t = y \tag{2}\\
&= f_{X+Y}(-z) &\text{on comparing (1) and (2)}\tag{3}
\end{align}$$
If $X$ and $Y$ have nonzero means $\mu_X$ and $\mu_Y$ respectively
and their densities are symmetric about their respective
means, then $\hat{X} = X-\mu_X$ and $\hat{Y} = Y - \mu_Y$ can be used
in the above proof to show that $\hat{Z} = \hat{X} + \hat{Y} =
(X+Y) - (\mu_X+\mu_Y) = Z - \mu_Z$ has a density symmetric about
$0$, and so $Z$ has a density symmetric about $\mu_Z$. Or,
we can use the outline suggested in @Quantibex's answer to incorporate
the means in the proof itself.
Similar proofs can be written for discrete random variables.
While the result is always true for independent random variables,
it can hold for some dependent random variables as well. As
an example, see this recently-closed question where it is shown that if $(X,Y)$ are
uniformly distributed on the unit disc (and hence have symmetric marginal
densities but are not independent), then $X+Y$ also has a symmetric
density; in fact,
$$f_{X+Y}(z) = \frac{1}{\sqrt{2}}f_X\left(\frac{z}{\sqrt{2}}\right). \tag{4}$$
Indeed, $(4)$ is true whenever $(X,Y)$ have a
circularly symmetric joint density (uniform density, as in the closed
question, is not needed). Another nice example (with nonzero means)
is the joint density that has value $2$ on the trapezoidal
region with vertices $(0,0), (1,1), (\frac 12, 1), (0,\frac 12)$ and
on the triangular region with vertices $(\frac 12,0), (1,0), (1,\frac 12)$.
It is readily verified that $X$ and $Y$ have
marginal densities $U(0,1)$ that are symmetric about their
mean $\frac 12$, and that they are not independent.
Nonetheless, the density of their sum is the convolution of the
marginal densities and is symmetric about $1$.
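A quick empirical illustration in R (added for intuition only; the two symmetric distributions below are arbitrary choices):
set.seed(1)
x <- rnorm(1e5, mean = 2)   # symmetric about 2
y <- runif(1e5, -1, 1)      # symmetric about 0, independent of x
z <- x + y                  # should be symmetric about 2
quantile(z - 2, c(0.05, 0.25, 0.75, 0.95))
# the lower and upper quantiles come out (approximately) as mirror images,
# as expected for a density symmetric about 0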
32,688 | The sum of two symmetric random variables is symmetric | An outline of a proof (in the case where $X$ and $Y$ are independent) is the following.
Denote $f_X$ and $f_Y$ the density functions of $X$ and $Y$, and $\mu_X$ and $\mu_Y$ their respective means.
Note that $f_X(\mu_X + x) = f_X(\mu_X - x)$ for all $x$ by the symmetry of $f_X$, and similarly $f_Y(\mu_Y + y) = f_Y(\mu_Y - y)$ for all $y$.
Let $Z = X + Y$, and denote $f_Z$ its density function and $\mu_Z$ its mean. Obviously, $\mu_Z = \mu_X + \mu_Y$.
It can be shown that $f_Z$ is the convolution $f_X * f_Y$ of $f_X$ and $f_Y$, where
$$
(f_X * f_Y)(z) = \int_{-\infty}^\infty f_X(z - y)f_Y(y) dy .
$$
To prove the symmetry of $f_Z$, show that $f_Z(\mu_Z + z) = f_Z(\mu_Z - z)$ for all $z$ using the convolution $f_X * f_Y$, with an appropriate change of variable such as $y = t + \mu_Y$, and using the symmetry properties of $f_X$ and $f_Y$.
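For completeness, one way to carry out that last step (a sketch following the suggested change of variable):
$$\eqalign{
f_Z(\mu_Z+z) &= \int_{-\infty}^\infty f_X(\mu_Z+z-y)\,f_Y(y)\,dy \\
&= \int_{-\infty}^\infty f_X(\mu_X+z-t)\,f_Y(\mu_Y+t)\,dt, &\text{substitution: } y=\mu_Y+t\\
&= \int_{-\infty}^\infty f_X(\mu_X-z+t)\,f_Y(\mu_Y-t)\,dt, &\text{symmetry of } f_X \text{ and } f_Y\\
&= \int_{-\infty}^\infty f_X\big((\mu_Z-z)-(\mu_Y+s)\big)\,f_Y(\mu_Y+s)\,ds, &\text{substitution: } t=-s\\
&= f_Z(\mu_Z-z).
}$$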
32,689 | When to use Bayes' theorem to calculate conditional probability? | As people have mentioned in the comments it depends on the problem. If you know $P(F)$ you can use your first equation. If you don't know $P(F)$ but you know $P(F|E)$ that is the probability of $F$ conditional on $E$ then you can use the second equation. Both of these equations are equivalent. | When to use Bayes' theorem to calculate conditional probability? | As people have mentioned in the comments it depends on the problem. If you know $P(F)$ you can use your first equation. If you don't know $P(F)$ but you know $P(F|E)$ that is the probability of $F$ co | When to use Bayes' theorem to calculate conditional probability?
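As a made-up numerical illustration: let $E$ be "has the disease" with $P(E)=0.01$, and $F$ be "tests positive" with $P(F|E)=0.95$ and $P(F|\neg E)=0.05$. Here $P(F)$ is not given directly, so the first form is not usable as-is; Bayes' theorem together with the law of total probability gives
$$P(E|F)=\frac{P(F|E)P(E)}{P(F|E)P(E)+P(F|\neg E)P(\neg E)}=\frac{0.95\times 0.01}{0.95\times 0.01+0.05\times 0.99}\approx 0.161.$$
If $P(F)$ (here $0.059$) were known, the first equation $P(E|F)=P(E\cap F)/P(F)$ would give the same number.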
32,690 | When to use Bayes' theorem to calculate conditional probability? | The law of total probability is used in Bayes theorem:
$$P(A|B)=\frac{P(A\cap B)}{P(B)} \implies P(A\cap B)=P(B)\,P(A|B).$$ This is just the definition of conditional probability.
Now, the Law of Total Probability can be used to calculate $P(B)$ in the above definition. The law requires that you have a set of disjoint events $E_i$ that collectively "cover" the event $B$. Then, instead of calculating $P(B)$ directly, you add up the intersection of $B$ with each of the events $E_i$:
$$P(B)=\sum_i P(B\cap E_i).$$ Of course, we can rewrite this using the definition of conditional probability:
$$P(B)=\sum_i P(B\cap E_i)=\sum_i P(E_i)\,P(B|E_i).$$
Thus, the following are equivalent:
$$P(B|A)=\frac{P(A\cap B)}{P(A)}=\frac{P(B)\,P(A|B)}{P(B)\,P(A|B)+P(\neg B)\,P(A|\neg B)},$$ since $B$ and $\neg B$ are disjoint events.
In general, Bayes' rule is used to "flip" a conditional probability, while the law of total probability is used when you don't know the probability of an event, but you know its occurrence under several disjoint scenarios and the probability of each scenario.
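A tiny R check with made-up numbers:
p_B          <- 0.3
p_A_given_B  <- 0.8
p_A_given_nB <- 0.1
p_A <- p_B * p_A_given_B + (1 - p_B) * p_A_given_nB  # law of total probability
p_B_given_A <- p_B * p_A_given_B / p_A                # Bayes' rule ("flipping" the conditional)
c(p_A = p_A, p_B_given_A = p_B_given_A)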
32,691 | Is it worth reporting small fixed-effect $R^2$ (marginal $R^2$), large model $R^2$ (conditional $R^2$)? | See Nakagawa and Schielzeth (2013) (or this blog entry) for more information and a discussion on $R^2$ for mixed models. First of all, as Nakagawa and Schielzeth note, using $R^2$ from an OLS linear model for mixed models is misleading and should be avoided. Generally, there are multiple ideas about how to compute $R^2$ for mixed models, but there's still no consensus on it. To me, Nakagawa and Schielzeth's ideas are the most interesting; however, you should always remember that $R^2$ for mixed models is not the same "variance explained" as for linear models and it is just an approximation. Among other approaches you could also check Snijders and Bosker (1994) for comparison.
Statistical significance is also problematic for mixed models (and problematic in general) so I wouldn't pay much attention to it.
As for your question, I'd recommend Gelman and Hill's (2007) book. First of all, they suggest comparing "effect sizes" for mixed models. In your case marginal $R^2$ is very small, but you should also look at the "betas": if they are small compared to the variance of your data, i.e. including these effects in the model does not change anything for estimation, you probably could abandon those effects. On the other hand, in the regression and mixed models literature there are multiple examples of leaving "non-significant" effects in the model, and it is almost never a decision based on a simple rule of thumb. For example, in your case the AIC changed quite dramatically, which suggests that the second model has a better fit. So I do not see a simple answer here, not yet.
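If it helps, both $R^2$ values can be reproduced in R, assuming the MuMIn package (which implements the Nakagawa and Schielzeth measures) is available; the sleepstudy data here is only an illustration:
library(lme4)
library(MuMIn)   # assumed available; provides r.squaredGLMM()
m0 <- lmer(Reaction ~ 1    + (1 | Subject), data = sleepstudy)
m1 <- lmer(Reaction ~ Days + (1 | Subject), data = sleepstudy)
r.squaredGLMM(m1)   # marginal (R2m) and conditional (R2c) R^2
AIC(m0, m1)         # compare fits alongside the R^2 values
fixef(m1)           # the size of the fixed effect itself, not only R2m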
32,692 | Is it worth reporting small fixed-effect $R^2$ (marginal $R^2$), large model $R^2$ (conditional $R^2$)? | It seems strange that the conditional R^2 goes down when you add the fixed effects. I know Nagakawa and Schielzeth talk about this kind of thing, but it seems less than useful if adding significant predictors decreases the conditional (i.e. fixed + random) R^2... | Is it worth reporting small fixed-effect $R^2$ (marginal $R^2$), large model $R^2$ (conditional $R^2 | It seems strange that the conditional R^2 goes down when you add the fixed effects. I know Nagakawa and Schielzeth talk about this kind of thing, but it seems less than useful if adding significant pr | Is it worth reporting small fixed-effect $R^2$ (marginal $R^2$), large model $R^2$ (conditional $R^2$)?
32,693 | Selecting the number of sparse principal components to include in regression | While I don't have direct insights about your question, I ran across some research papers, which might be of your interest. That is, of course, if I understand correctly that you are talking about sparse PCA, principal component regression and related topics. In that case, here are the papers:
Sparse PCA for high-dimensional data with outliers
Robust sparse principal component analysis
Robust Sparse Principal Component Regression under the High Dimensional Elliptical Model
Structured Sparse Principal Component Analysis
Robust Principal Component Analysis?
32,694 | Selecting the number of sparse principal components to include in regression | The cross validation results were also used to determine the optimal number of dimensions for the LSI space. Too few dimensions did not take advantage of the predictive power of the data; while too many dimensions resulted in
over-fitting. Fig. 4 shows the distribution of average errors for models with different numbers of LSI dimensions. The models with four dimensional LSI spaces produced both the fewest average number of errors and the fewest median number of errors, so the final model was built using a four dimensional LSI space.
Link
I can post a copy if you aren't an ieee member.
This is from a paper I wrote in undergrad. I had a problem where I needed to decide how many dimensions (Latent Semantic Indexing is similar to PCA) to use in my logistic regression model. What I did was pick a metric (i.e. the error rate when using a flagging probability of .5) and looked at the distribution for this error rate for different models trained on different number of dimensions. I then picked the model with the lowest error rate. You could use other metrics like area under ROC curve.
You could also use something like stepwise regression to pick the number of dimensions for you. What type of regression are you performing specifically?
What do you mean by sparse btw?
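A toy R sketch of the same selection idea for PCA components in a logistic regression (all data and numbers below are made up; for a cleaner estimate the PCA itself would be refit inside each fold):
set.seed(1)
n <- 200; p <- 30
X <- matrix(rnorm(n * p), n, p)
y <- rbinom(n, 1, plogis(X[, 1] - 0.5 * X[, 2]))
pc    <- prcomp(X, scale. = TRUE)
folds <- sample(rep(1:5, length.out = n))
cv_err <- sapply(1:10, function(k) {
  mean(sapply(1:5, function(f) {
    d    <- data.frame(y = y, pc$x[, 1:k, drop = FALSE])
    fit  <- glm(y ~ ., data = d[folds != f, ], family = binomial)
    pred <- predict(fit, newdata = d[folds == f, ], type = "response") > 0.5
    mean(pred != y[folds == f])
  }))
})
which.min(cv_err)   # number of components with the lowest cross-validated error rate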
32,695 | Probability to draw a black ball in a set of black and white balls with mixed replacement conditions | Let the initial number of white balls be $w$ and black balls be $b$. The question describes a Markov chain whose states are indexed by the possible numbers of black balls $i \in \{0, 1, 2, \ldots, b\}.$ The transition probabilities are
$$p_w(i, i) = \frac{w}{w+i}, \quad p_w(i,i-1) = \frac{i}{w+i}.$$
The first describes drawing a white ball, in which case $i$ does not change, and the second describes drawing a black ball, in which case $i$ is reduced by $1$.
From now on let us drop the explicit subscript "$w$," taking this value as fixed throughout. The eigenvalues of the transition matrix $\mathbb{P}$ are
$$\mathbf{e} = \left(\frac{w}{w+b-i},\ i = 0, 1, \ldots, b\right)$$
corresponding to the matrix $\mathbb{Q}$ given by
$$q_{ij} = (-1)^{i+j+b} (j+w) \binom{b}{j} w^{j-b} \binom{b-j}{i} (b-i+w)^{b-j-1}$$
whose inverse is
$$(q^{-1})_{ij} = \frac{w^{b-i} \binom{j}{b-i} (b-j+w)^{i-b}}{\binom{b}{b-i}}.$$
That is,
$$\mathbb{P} = \mathbb{Q}\ \text{Diagonal}(\mathbf{e})\ \mathbb{Q}^{-1}.$$
Consequently the distribution after $n$ transitions out of the state $b$ is given by the vector of probabilities
$$\mathbf{p}_n = (0,0,\ldots,0,1) \mathbb{P}^n = (0,0,\ldots,0,1)\mathbb{Q}\ \text{Diagonal}(\mathbf{e}^n)\ \mathbb{Q}^{-1}.$$
That is, the chance there are $i$ black balls left after $n$ draws is
$$p_{ni} = \sum_{j=0}^b q_{bj} e_j^n (q^{-1})_{ji}.$$
For example, starting with any number of white balls and $b=2$ black balls, the probability distribution after $n \ge 1$ draws is
$$\eqalign{
\Pr(i=2) &= p_{n2} &= \frac{w^n}{(2+w)^n} \\
\Pr(i=1) &= p_{n1} &= \frac{2w^{n-1}}{(1+w)^{n-1}} - \frac{2 w^{n-1}(1+w)}{(2+w)^n} \\
\Pr(i=0) &= p_{n0} &= 1 - \frac{2 w^{n-1}}{(1+w)^{n-1}} + \frac{w^{n-1}}{(2+w)^{n-1}}.
}$$
The curves in this figure track the probabilities of the states $i=0$ (blue), $i=1$ (red), and $i=2$ (gold) as a function of the number of draws $n$ when $w=5$; that is, the urn begins with two black balls and five white balls.
The state $i=0$ (running out of black balls) is an absorbing state: in the limit as $n$ grows without bound, the probability of this state approaches unity (but never exactly reaches it).
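A short Monte Carlo check of the $b=2$ formulas in R (with $w=5$ white balls and $n=10$ draws, chosen arbitrarily):
set.seed(1)
w <- 5; n <- 10; reps <- 1e5
sim <- replicate(reps, {
  black <- 2
  for (k in 1:n) {
    if (runif(1) < black / (w + black)) black <- black - 1  # a black ball is drawn and not replaced
  }
  black
})
table(sim) / reps   # simulated distribution of the number of black balls left
p2 <- w^n / (2 + w)^n
p1 <- 2 * w^(n - 1) / (1 + w)^(n - 1) - 2 * w^(n - 1) * (1 + w) / (2 + w)^n
p0 <- 1 - 2 * w^(n - 1) / (1 + w)^(n - 1) + w^(n - 1) / (2 + w)^(n - 1)
c(p0 = p0, p1 = p1, p2 = p2)   # closed-form values for comparison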
32,696 | For the binomial distribution, why does no unbiased estimator exist for $1/p$? | By definition, an estimator of any property of the distribution of $X$ is a function $t$ of the possible values of $X,$ here equal to $0, 1, \ldots, n.$
Given $n,$ assume $X$ has some Binomial$(n,p)$ distribution where $p$ is known only to lie within a given set $\Omega \subset[0,1].$ We will say more about $\Omega$ at the end.
The expectation of $t$ is its probability-weighted average,
$$E[t(X)] = \sum_{x=0}^n \Pr(X=x) t(x) = \sum_{x=0}^n \binom{n}{x}p^x(1-p)^{n-x} t(x).$$
(Note that the expressions "$t(x)$" are just numbers, one for each $x=0,1,\ldots, n.$)
This expectation depends on $p.$ In the specific case where $1/p$ is to be estimated, the estimator is unbiased when it equals $1/p$ for all values of $p \in\Omega;$ that is,
$$\frac{1}{p} = E[t(X)] = \sum_{x=0}^n \binom{n}{x}p^x(1-p)^{n-x} t(x).\tag{*}$$
Since $p\ne 0,$ this is algebraically equivalent to
$$\eqalign{
0 &= pE[t(x)] - 1 \\
&= -1 + \sum_{x=0}^n \binom{n}{x}p^{x+1}(1-p)^{n-x} t(x)\\
&= -1 + \sum_{x=0}^n \binom{n}{x}p^{x+1}\sum_{i=0}^{n-x}\binom{n-x}{i}(-p)^i t(x)\\
&= -1 + \sum_{x=0}^n\sum_{i=0}^{n-x}(-1)^i t(x) \binom{n}{x}\binom{n-x}{i}\,p^{x+1+i}\\
&= -1 + \sum_{k=1}^{n+1} \left(\sum_{i=0}^{k-1}(-1)^i t(k-1-i)\binom{n}{k-1-i}\binom{n-k+1+i}{i}\right)\, p^k\\
&= \sum_{k=0}^{n+1} a_k\, p^k
}$$
where $a_0=-1$ and
$$a_k = \sum_{i=0}^{k-1}(-1)^i t(k-1-i)\binom{n}{k-1-i}\binom{n-k+1+i}{i}$$
are constants determined by $t.$
This is explicitly a nonzero polynomial of degree at most $n+1$ in $p$ and therefore can have at most $n+1$ zeros. If, then, $\Omega$ contains more than $n+2$ values, this equation cannot hold for all of them, whence $t$ cannot be unbiased.
Generalizations of this result to certain other functions of $p,$ besides $1/p,$ should be obvious.
The reason why this argument does not generalize to, say, estimating $p,$ is that similar calculations give a polynomial whose coefficients actually can be reduced to zero by a suitable choice of $t:$ that's why it was crucial to observe that the polynomial determined by $a_0, a_1, \ldots, a_{n+1}$ is nonzero (because $a_0=-1$ no matter what).
Take, for instance, the case $n=2.$ The condition $(*)$ of unbiasedness becomes, for all $p\in\Omega,$
$$\eqalign{
0 &= -p + E[t(X)] \\&= -p + \left[t(0)(1-p)^2 + 2t(1)p(1-p) + t(2)p^2\right] \\
&= t(0) + (-1-2t(0)+2t(1))p + (t(0)-2t(1) + t(2))p^2.
}$$
Working from left to right we find that the coefficients can all be made zero by setting $t(0)=0,$ then $t(1)=1/2,$ and finally $t(2) = 1.$ This is the only set of choices that does so. Thus, when $n=2$ and $\Omega$ contains at least three elements, this estimator $t$ is the unique unbiased estimator of $p.$
Finally, as an example of why the content of $\Omega$ matters, suppose $\Omega=\{1/3, 2/3\}.$ That is, we know $X$ counts the heads in two flips of a coin that favors either tails or heads by odds of $2:1$ (but we don't know which way). An unbiased estimate $1/p$ is obtained by the estimator $t(0) = 11/2,$ $t(1) = 1 = t(2).$ The check is straightforward: when $p=1/3$, the expectation of $t$ is
$$(2/3)^2\,t(0) + 2(2/3)(1/3)\,t(1) + (1/3)^2\,t(2) = (4/9)(11/2) + 4/9 + 1/9 = 3$$
and when $p=2/3$ the expectation is
$$(1/3)^2\,t(0) + 2(1/3)(2/3)\,t(1) + (2/3)^2\,t(2) = (1/9)(11/2) + 4/9 + 4/9 = 3/2.$$
In each case the expectation indeed is $1/p.$ (It is amusing that none of the values of $t,$ though, is actually equal to $3$ or $3/2,$ which are the only two possible values of $1/p.$)
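The last example is easy to check numerically in R:
t_vals <- c(11/2, 1, 1)   # t(0), t(1), t(2)
for (p in c(1/3, 2/3)) {
  Et <- sum(dbinom(0:2, size = 2, prob = p) * t_vals)
  cat("p =", round(p, 3), " E[t(X)] =", Et, " 1/p =", 1/p, "\n")
}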
32,697 | How to compare measurements and uncertainties made with different measuring instruments? | The model you use to "simulate your problem" can be used almost verbatim to estimate the parameters you are interested in using Bayesian estimation. Here is the model I'll use (using the same notation as you):
$$ L_B \sim \mathrm{Normal}(\mu, \sigma) \\
x_i \sim \mathrm{Normal}(\mu, \sigma) \mathrm{\ for\ i\ from\ 1\ to\ N} \\
L_{Ai} \sim \mathrm{Normal}(x_i \cdot \mathrm{gain} - \mathrm{offset}, \mathrm{dispersion}) \mathrm{\ for\ i\ from\ 1\ to\ N} \\
$$
The glaring omission in this model compared to your problem is that I don't include the assumption that some of the same $x_i$s that got measured by B could then be measured again by A. This could probably be added, but I'm not completely sure how.
This model is implemented in R & JAGS below using very vague, almost flat priors; the data used is the data you generated in your question:
library(rjags)
model_string <- "model{
  for(i in 1:length(L_B)) {
    L_B[i] ~ dnorm(mu, inv_sigma2) # <- reparameterizing sigma into precision
                                   #    needed because of JAGS/BUGS legacy.
  }
  for(i in 1:length(L_A)) {
    x[i] ~ dnorm(mu, inv_sigma2)
    L_A[i] ~ dnorm(gain * x[i] - offset , inv_dispersion2)
  }
  mu ~ dnorm(0, 0.00001)
  inv_sigma2 ~ dgamma(0.0001, 0.0001)
  sigma <- sqrt(1 / inv_sigma2)
  gain ~ dnorm(0, 0.00001) T(0,)
  offset ~ dnorm(0, 0.00001)
  inv_dispersion2 ~ dgamma(0.0001, 0.0001)
  dispersion <- sqrt(1 / inv_dispersion2)
}"
Let's run it and see how well it does:
model <- jags.model(textConnection(model_string), list(L_A = L_A, L_B = L_B), n.chains=3)
update(model, 3000)
mcmc_samples <- coda.samples(model, c("mu", "sigma", "gain", "offset", "dispersion"), 200000, thin=100)
apply(as.matrix(mcmc_samples), 2, quantile, c(0.025, 0.5, 0.975))
##        dispersion   gain      mu   offset  sigma
## 2.5%      0.01057 0.1366 -0.3116 -0.51836 0.9365
## 50%       0.18657 1.0745 -0.1099 -0.26950 1.0675
## 97.5%     1.20153 1.2846  0.1051 -0.04409 1.2433
The resulting estimates are reasonably close to the values you used when you generated the data:
c(gain_A, offset_A, dispersion_A)
## [1] 1.1 -0.2 0.5
...except for, perhaps, dispersion. But with more data, perhaps more informed priors and running the MCMC sampling longer this estimate should be better.
32,698 | How to compare measurements and uncertainties made with different measuring instruments? | I'm more of an engineering guy, not a stats guy, so let's talk nuts-and-bolts.
Find:
estimated object metric uncertainty (state covariance)
(possibly) an established standard measurement for finding measurement uncertainty.
Given:
random replacement of the object measured after each measurement, so paired comparison is not allowed.
two measurement tools with different measurement uncertainty, one of whose uncertainty is not characterized.
Considerations:
Calibration. Most measurement systems are calibrated against high quality standards and so they have minimal bias. I should be able to assume this, but your system indicates bias as an issue.
Sample size. Taking 100 measurements is pretty weak. I prefer at least 300, but that is related to my data. You need to make sure that you get enough samples to minimize the error in your estimates, but not so much that you are buried. I can easily get 10M samples, but it is beyond the scope of MatLab and my laptop to do anything more than basics with that many rows.
Approach:
convert your code from R to a language I know well, MatLab.
look at CDF-domain interpolation and scatterplots.
MatLab of your code:
function MySimulation
% housekeeping
clc;
%parameters
N=1000
true_mean = 0
true_sd = 1
% I simulate the property to be measured for N objects, here the property is
% normally distributed...
true_values = normrnd(true_mean,true_sd,N,1);
% but it could also be a mixture of normal distributions:
% it could be anything. A Gaussian mixture is crazy-tame compared to what
% it could be, but we have to assume something to start this.
% The "quality" of instrument B is "good enough" to measure the true values:
gain_B = 1;
offset_B = true_sd/10;
dispersion_B = true_sd/10;
% The instrument B has a lower "quality" than the one of instrument A:
gain_A = 1.1*gain_B;
offset_A=-2*offset_B;
dispersion_A=5*dispersion_B;
% I simulate the measurements made by instrument A:
L_A = instrument_measurement(true_values,gain_A,offset_A,dispersion_A);
% I make the sample:
% sample_to_measure_with_B = sample(true_values,100,replace=F)
sample_to_measure_with_B = randsample(true_values,100);
% I simulate the measurements made by instrument B:
L_B = instrument_measurement(sample_to_measure_with_B,gain_B,offset_B,dispersion_B);
% I plot the empirical CDF of the true values, of the measurements made with
% instrument A and of the measurements made with instrument B
figure(1); clf; hold on
h1=cdfplot(true_values);
set(h1,'Color',0.5.*[1,1,1],'Linewidth',2);
h2=cdfplot(L_A);
set(h2,'Color','b','Marker','+');
h3=cdfplot(L_B);
set(h3,'Color',[1,0.65,0],'Marker','o');
legend('true','A','B','location','northwest')
title('Empirical CDFs')
xlabel('x-measured property')
ylabel('value of empirical CDF')
function [out] = instrument_measurement(true_value,gain,offset,dispersion)
% The instrument has three parameters: the true_value is transformed by means
% of a linear transformation described by parameters offset and gain; then there
% is a dispersion parameter. An ideal instrument would have gain=1, offset=0
% and dispersion that approaches to zero.
out=normrnd(gain*true_value-offset,dispersion,length(true_value),1);
return
Your picture, translated:
10x Higher sampling:
Now when we plot the A vs. B using the following
[fa,xa]=ecdf(L_A);      % note: ecdf returns [f,x] -- CDF values f at the sorted data x
[fb,xb]=ecdf(L_B);
fa2=fa;                 % evaluate at A's CDF levels
xb2=interp1(fb,xb,fa2); % B's quantile function at those CDF levels
we get this:
If you look at it, you see that, except for runaway at the tails, the relationship between these two distributions is essentially linear. It is scale and offset. I would use sample size to figure out where to truncate the tails, then fit an analytic line through that. Then I can translate from one sensor reading to another.
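As a minimal sketch of that fitting step (reusing fa, xa and xb2 from the snippet above; the 5%/95% trim levels and the names keep, p_fit and B_from_A are illustrative choices, not from the original post):
% Trim the tails of the quantile-quantile relationship and fit a line
% through the central part; slope ~ relative gain, intercept ~ relative offset.
keep     = fa > 0.05 & fa < 0.95;           % arbitrary 5%/95% trim of the tails
p_fit    = polyfit(xa(keep), xb2(keep), 1); % p_fit(1) = slope, p_fit(2) = intercept
B_from_A = polyval(p_fit, L_A);             % translate A readings onto B's scale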
The trick of mapping through CDFs is how to deal with many distributions. You need to make sure that your sample size is high enough to capture the characteristics of the underlying distribution.
If you were dealing with one bite at a time, iteratively instead of 10k samples at a time, then you could use a Kalman filter to determine the mean and variance, or the transform between A and B. The preceding is a "rev1" brute force sort of approach. It is inelegant. It has holes and weaknesses. It is not strongly characterized. It does not stand on amazingly strong theoretical foundations. It is, however, reasonably quick and "good enough", which are also nice measures of goodness.
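Here is a minimal scalar sketch of that iterative idea (illustrative only -- the noise variance R, the initial values and the sample measurements z_all are made-up numbers, not from the original data):
% One-dimensional Kalman-style update: each new measurement z refines the
% estimate x_hat of a constant true value, with P the estimate variance.
R     = 0.25;          % assumed measurement noise variance (made up for the sketch)
x_hat = 0;  P = 1e3;   % vague initial estimate and variance
z_all = [1.02 0.97 1.05 0.99 1.01];   % incoming measurements (illustrative)
for z = z_all
    K     = P/(P + R);               % Kalman gain
    x_hat = x_hat + K*(z - x_hat);   % blend current estimate and measurement
    P     = (1 - K)*P;               % updated estimate variance
end
With no process noise this reduces to a recursive, precision-weighted running mean, which is the "determine the mean and variance" use mentioned above.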
Welch has a great introduction. (link)
A keyword for this is "sensor fusion". Here is a second link on the topic. (link)
Best of luck. | How to compare measurements and uncertainties made with different measuring instruments? | I'm more of an engineering guy, not a stats guy, so let's talk nuts-and-bolts.
Find:
estimated object metric uncertainty (state covariance)
(possibly) an established standard measurement for finding m | How to compare measurements and uncertainties made with different measuring instruments?
I'm more of an engineering guy, not a stats guy, so let's talk nuts-and-bolts.
Find:
estimated object metric uncertainty (state covariance)
(possibly) an established standard measurement for finding measurement uncertainty.
Given:
random replacement of the object measured after each measurement, so paired comparison is not allowed.
two measurement tools with different measurement uncertainty, one of whose uncertainty is not characterized.
Considerations:
Calibration. Most measurement systems are calibrated against high quality standards and so they have minimal bias. I should be able to assume this, but your system indicates bias as an issue.
Sample size. Taking 100 measurements is pretty weak. I prefer at least 300, but that is related to my data. You need to make sure that you get enough samples to minimize the error in your estimates, but not so much that you are buried. I can easily get 10M samples, but it is beyond the scope of MatLab and my laptop to do anything more than basics with that many rows.
Approach:
convert your code from R to a language I know well, MatLab.
look at CDF-domain interpolation and scatterplots.
MatLab version of your code:
function MySimulation
% housekeeping
clc;
%parameters
N = 1000;
true_mean = 0;
true_sd = 1;
% I simulate the property to be measured for N objects, here the property is
% normally distributed...
true_values = normrnd(true_mean,true_sd,N,1);
% but it could also be a mixture of normal distributions:
% it could be anything. A Gaussian mixture is crazy-tame compared to what
% it could be, but we have to assume something to start this.
% The "quality" of instrument B is "good enough" to measure the true values:
gain_B = 1;
offset_B = true_sd/10;
dispersion_B = true_sd/10;
% Instrument A has a lower "quality" than instrument B:
gain_A = 1.1*gain_B;
offset_A=-2*offset_B;
dispersion_A=5*dispersion_B;
% I simulate the measurements made by instrument A:
L_A = instrument_measurement(true_values,gain_A,offset_A,dispersion_A);
% I make the sample:
% sample_to_measure_with_B = sample(true_values,100,replace=F)
sample_to_measure_with_B = randsample(true_values,100);
% I simulate the measurements made by instrument B:
L_B = instrument_measurement(sample_to_measure_with_B,gain_B,offset_B,dispersion_B);
% I plot the empirical CDF of the true values, of the measurements made with
% instrument A and of the measurements made with instrument B
figure(1); clf; hold on
h1=cdfplot(true_values);
set(h1,'Color',0.5.*[1,1,1],'Linewidth',2);
h2=cdfplot(L_A);
set(h2,'Color','b','Marker','+');
h3=cdfplot(L_B);
set(h3,'Color',[1,0.65,0],'Marker','o');
legend('true','A','B','location','northwest')
title('Empirical CDFs')
xlabel('x-measured property')
ylabel('value of empirical CDF')
function [out] = instrument_measurement(true_value,gain,offset,dispersion)
% The instrument has three parameters: the true_value is transformed by means
% of a linear transformation described by parameters offset and gain; then there
% is a dispersion parameter. An ideal instrument would have gain=1, offset=0
% and dispersion that approaches to zero.
out=normrnd(gain*true_value-offset,dispersion,length(true_value),1);
return
Your picture, translated:
10x Higher sampling:
Now when we plot the A vs. B using the following
[fa,xa]=ecdf(L_A);      % note: ecdf returns [f,x] -- CDF values f at the sorted data x
[fb,xb]=ecdf(L_B);
fa2=fa;                 % evaluate at A's CDF levels
xb2=interp1(fb,xb,fa2); % B's quantile function at those CDF levels
we get this:
If you look at it, you see that, except for runaway at the tails, the relationship between these two distributions is essentially linear. It is scale and offset. I would use sample size to figure out where to truncate the tails, then fit an analytic line through that. Then I can translate from one sensor reading to another.
The trick of mapping through CDFs is how to deal with many distributions. You need to make sure that your sample size is high enough to capture the characteristics of the underlying distribution.
If you were dealing with one bite at a time, iteratively instead of 10k samples at a time, then you could use a Kalman filter to determine the mean and variance, or the transform between A and B. The preceding is a "rev1" brute force sort of approach. It is inelegant. It has holes and weaknesses. It is not strongly characterized. It does not stand on amazingly strong theoretical foundations. It is, however, reasonably quick and "good enough", which are also nice measures of goodness.
Welch has a great introduction. (link)
A keyword for this is "sensor fusion". Here is a second link on the topic. (link)
Best of luck. | How to compare measurements and uncertainties made with different measuring instruments?
I'm more of an engineering guy, not a stats guy, so let's talk nuts-and-bolts.
Find:
estimated object metric uncertainty (state covariance)
(possibly) an established standard measurement for finding m |
32,699 | For a continuous random variable, why does $P(a < Z < b) = P(a \leq Z < b) = P(a < Z \leq b) = P(a \leq Z \leq b)$ | Nothing formal to add to this, but an analogy that really helped me to understand this came from a calculus text. Imagine you have an iron pipe of a certain length and weight, and you wish to cut it into two pieces. If the pipe is, say, 1 m long, you might want to cut it in half at the 0.5 m mark. Now think of the pipe's weight as some constant times the length of the pipe (we assume that all cross-sections of equal length have the same weight).
Cutting the pipe in half at the 0.5 m mark - how much weight do you lose? Remember that the only cross-section you are removing is the 0.5 m mark itself. So what is the length of this cross-section? Consider that 0.4999999 is not a part of it, and neither is 0.5000001, nor any other point that is close to, but not equal to, 0.5 - so the length of this cross-section is technically zero, which means you're not really removing any weight at all.
This would explain why $\leq$ and $<$ are basically the same for continuous variables - including or excluding the endpoint really doesn't change anything - for any point you pick close to the endpoint, there is still an infinite number of points between them.
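Put in the notation of the question (a gloss on the analogy, not part of it), the zero-length cross-section corresponds to a single point carrying zero probability, $$P(Z=c)=\int_{c}^{c}f(z)\,dz=0,$$ which is why including or excluding the endpoints $a$ and $b$ leaves the probability unchanged.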
Does this make any sense? | For a continuous random variable, why does $P(a < Z < b) = P(a \leq Z < b) = P(a < Z \leq b) = P(a \ | Nothing formal to add to this, but an analogy that really helped me to understand this came from a calculus text. Imagine you have an iron pipe of a certain length and weight. And you wish to cut it i | For a continuous random variable, why does $P(a < Z < b) = P(a \leq Z < b) = P(a < Z \leq b) = P(a \leq Z \leq b)$
Nothing formal to add to this, but an analogy that really helped me to understand this came from a calculus text. Imagine you have an iron pipe of a certain length and weight, and you wish to cut it into two pieces. If the pipe is, say, 1 m long, you might want to cut it in half at the 0.5 m mark. Now think of the pipe's weight as some constant times the length of the pipe (we assume that all cross-sections of equal length have the same weight).
Cutting the pipe in half at the 0.5 m mark - how much weight do you lose? Remember that the only cross-section you are removing is the 0.5 m mark itself. So what is the length of this cross-section? Consider that 0.4999999 is not a part of it, and neither is 0.5000001, nor any other point that is close to, but not equal to, 0.5 - so the length of this cross-section is technically zero, which means you're not really removing any weight at all.
This would explain why $\leq$ and $<$ are basically the same for continuous variables - including or excluding the endpoint really doesn't change anything - for any point you pick close to the endpoint, there is still an infinite number of points between them.
Does this make any sense? | For a continuous random variable, why does $P(a < Z < b) = P(a \leq Z < b) = P(a < Z \leq b) = P(a \
Nothing formal to add to this, but an analogy that really helped me to understand this came from a calculus text. Imagine you have an iron pipe of a certain length and weight. And you wish to cut it i |
32,700 | For a continuous random variable, why does $P(a < Z < b) = P(a \leq Z < b) = P(a < Z \leq b) = P(a \leq Z \leq b)$ | First I will give the definition of an (absolutely) continuous random variable $Z$.
(Advanced probability is needed; you may skip it!)
Let $(\Omega,F, P)$ be a probability space and let $Z:\Omega\rightarrow R^n$ be a random vector. The probability measure $P_Z$ on $B(R^n)$ defined by $P_Z(A)=P\{Z\in A\}$, $A \in B(R^n)$, is called the distribution of $Z$. Now if $P_Z\ll \mu,$ where $\mu$ is Lebesgue measure on $R^n$ (i.e. $P_Z$ is absolutely continuous with respect to $\mu$), then we say that $Z$ is an (absolutely) continuous random vector. Now, by the Radon–Nikodym theorem, there exists a function $f: R^n\to[0,+\infty]$ such that $P_Z(A)=\int_{A}f\,d\mu$ for all $A \in B(R^n)$. We call $f$ the density function of $Z$.
Now define the cumulative distribution function (CDF) of an absolutely continuous random variable $Z$ as: $$F_Z(z)=P(Z\leq z).$$
Before I give a formal proof, let's have an example of a continuous random variable that is uniformly distributed, i.e. with probability density function $f(z)=1$ for $0\leq z \leq 1$ and $0$ otherwise. Now let's try to find $P(Z=0.5)$. We have $$P(Z=0.5)\approx P(0.4< Z\leq 0.6)=\int_{0.4}^{0.6}f(z)\,dz=0.2.$$ We can shrink that interval to get a better approximation as follows: $$P(Z=0.5)\approx P(0.49< Z\leq 0.51)=\int_{0.49}^{0.51}f(z)\,dz=0.02,$$ $$P(Z=0.5)\approx P(0.499< Z\leq 0.501)=\int_{0.499}^{0.501}f(z)\,dz=0.002.$$ As you can see, these probabilities converge to zero as we shrink the length of the interval. Now let's prove it formally. I am going to show that for any continuous random variable $Z$ we have $$P(Z=a)=0,$$ by using the CDF: $$P(Z=a)=\lim_{\epsilon\to 0}P(a-\epsilon< Z\leq a+\epsilon)=\lim_{\epsilon\to 0}F_Z(a+\epsilon)-\lim_{\epsilon\to 0}F_Z(a-\epsilon)\\=F_Z(a)-F_Z(a)=0,$$ since the CDF $F_Z$ is a continuous function for the continuous random variable $Z$. Similarly, $P(Z=b)=0$.
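A quick numerical check of the shrinking-interval example above (a sketch in MATLAB to match the code used elsewhere on this page; the density handle f and the half-widths h are my own illustrative choices):
f = @(z) ones(size(z));                  % uniform density on [0,1]
for h = [0.1 0.01 0.001 1e-6]            % half-width of the interval around 0.5
    p = integral(f, 0.5-h, 0.5+h);       % P(0.5-h < Z <= 0.5+h) = 2*h
    fprintf('half-width = %-8g  P = %g\n', h, p);
end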
Finally note that $$P(A\bigcup B)=P(A)+P(B)-P(A\bigcap B).$$ So $$P(a\leq Z<b)=P\Big(\{Z=a\}\bigcup \{a<Z<b\}\Big)=P(Z=a)+P(a<Z<b)\\=0+P(a<Z<b)=P(a<Z<b),$$ where the two events are disjoint, so the intersection term vanishes. You can use the same argument for other equalities. | For a continuous random variable, why does $P(a < Z < b) = P(a \leq Z < b) = P(a < Z \leq b) = P(a \ | First I will give the definition of an (absolutely) continuous random variable $Z$.
(Advanced probability is needed; you may skip it!)
Let $(\Omega,F, P)$ be a probability space and let $Z:\O | For a continuous random variable, why does $P(a < Z < b) = P(a \leq Z < b) = P(a < Z \leq b) = P(a \leq Z \leq b)$
First I will give the definition of an (absolutely) continuous random variable $Z$.
(Advanced probability is needed; you may skip it!)
Let $(\Omega,F, P)$ be a probability space and let $Z:\Omega\rightarrow R^n$ be a random vector. The probability measure $P_Z$ on $B(R^n)$ defined by $P_Z(A)=P\{Z\in A\}$, $A \in B(R^n)$, is called the distribution of $Z$. Now if $P_Z\ll \mu,$ where $\mu$ is Lebesgue measure on $R^n$ (i.e. $P_Z$ is absolutely continuous with respect to $\mu$), then we say that $Z$ is an (absolutely) continuous random vector. Now, by the Radon–Nikodym theorem, there exists a function $f: R^n\to[0,+\infty]$ such that $P_Z(A)=\int_{A}f\,d\mu$ for all $A \in B(R^n)$. We call $f$ the density function of $Z$.
Now define the cumulative distribution function (CDF) of an absolutely continuous random variable $Z$ as: $$F_Z(z)=P(Z\leq z).$$
Before I give a formal proof, let's have an example of a continuous random variable that is uniformly distributed, i.e. with probability density function $f(z)=1$ for $0\leq z \leq 1$ and $0$ otherwise. Now let's try to find $P(Z=0.5)$. We have $$P(Z=0.5)\approx P(0.4< Z\leq 0.6)=\int_{0.4}^{0.6}f(z)\,dz=0.2.$$ We can shrink that interval to get a better approximation as follows: $$P(Z=0.5)\approx P(0.49< Z\leq 0.51)=\int_{0.49}^{0.51}f(z)\,dz=0.02,$$ $$P(Z=0.5)\approx P(0.499< Z\leq 0.501)=\int_{0.499}^{0.501}f(z)\,dz=0.002.$$ As you can see, these probabilities converge to zero as we shrink the length of the interval. Now let's prove it formally. I am going to show that for any continuous random variable $Z$ we have $$P(Z=a)=0,$$ by using the CDF: $$P(Z=a)=\lim_{\epsilon\to 0}P(a-\epsilon< Z\leq a+\epsilon)=\lim_{\epsilon\to 0}F_Z(a+\epsilon)-\lim_{\epsilon\to 0}F_Z(a-\epsilon)\\=F_Z(a)-F_Z(a)=0,$$ since the CDF $F_Z$ is a continuous function for the continuous random variable $Z$. Similarly, $P(Z=b)=0$.
Finally note that $$P(A\bigcup B)=P(A)+P(B)-P(A\bigcap B).$$ So $$P(a\leq Z<b)=P\Big(\{Z=a\}\bigcup \{a<Z<b\}\Big)=P(Z=a)+P(a<Z<b)\\=0+P(a<Z<b)=P(a<Z<b),$$ where the two events are disjoint, so the intersection term vanishes. You can use the same argument for other equalities. | For a continuous random variable, why does $P(a < Z < b) = P(a \
First I will give the definition of an (absolutely) continuous random variable $Z$.
(Advanced probability is needed; you may skip it!)
Let $(\Omega,F, P)$ be a probability space and let $Z:\O |