11,301
Blind source separation of convex mixture?
It could be achieved by using an exponential non-linea...
11,302
Blind source separation of convex mixture?
Transformation of variables is a good option when it linearizes the problem. That procedure can be used to increase the correlations, reduce the residuals, and decrease the number of parameters needed to produce a good fit to the data. For example, $\ln Y=a_0+a_1\ln X_1+a_2\ln X_2\to Y=e^{a_0}X_1^{a_1}X_2^{a_2},$ mig...
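A minimal R sketch of that log-log fit (the data and exponents are invented for illustration): simulate $Y=e^{a_0}X_1^{a_1}X_2^{a_2}$ with multiplicative noise and recover the coefficients with lm on the logged variables.

set.seed(1)
X1 <- runif(100, 1, 10)
X2 <- runif(100, 1, 10)
Y  <- exp(0.5) * X1^2 / X2 * exp(rnorm(100, sd = 0.05))  # true (a0, a1, a2) = (0.5, 2, -1)
fit <- lm(log(Y) ~ log(X1) + log(X2))                    # the linearized model
coef(fit)                                                # recovers the exponents approximately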
11,303
Why is the use of high order polynomials for regression discouraged?
I cover this in some detail in Chapter 2 of RMS. Briefly, besides extrapolation problems, ordinary polynomials have these problems (the first is demonstrated below):
- The shape of the fit in one region of the data is influenced by far away points
- Polynomials cannot fit threshold effects, e.g., a nearly flat curve that suddenly accelerates
- Polynomials ...
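A short R demonstration of the first problem (data simulated; the degree-6 fit is an arbitrary choice): perturbing only the right-most observation shifts the fitted polynomial in the middle of the range.

set.seed(1)
x  <- seq(0, 1, length.out = 30)
y  <- sin(2 * pi * x) + rnorm(30, sd = 0.1)
f1 <- lm(y ~ poly(x, 6))
y2 <- y
y2[30] <- y2[30] + 3                      # move only the far-right point
f2 <- lm(y2 ~ poly(x, 6))
predict(f1, data.frame(x = 0.5)) -
  predict(f2, data.frame(x = 0.5))        # the fit at x = 0.5 moves too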
11,304
Why is the use of high order polynomials for regression discouraged?
Yes, polynomials are also problematic in interpolation, because of overfitting and high variability. Here is an example. Assume your dependent variable $y$ is uniformly distributed on the interval $[0,1]$. You also have a "predictor" variable $x$, also uniformly distributed on the interval $[0,1]$. However, there is no...
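A compact version of that experiment in R (sample size and degree are my choices): with 20 pure-noise points, a degree-15 polynomial reports a sizable R-squared despite there being no relationship at all.

set.seed(1)
x <- runif(20)                   # "predictor", unrelated to y
y <- runif(20)                   # independent uniform response
fit <- lm(y ~ poly(x, 15))
summary(fit)$r.squared           # large, purely from overfitting noise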
11,305
Why is the use of high order polynomials for regression discouraged?
If your goal is interpolation, you typically want the simplest function that describes your observations and avoids overfitting. Given that it is unusual to see physical laws and relationships which contain powers higher than 3, using higher order polynomials when there is no intuitive justification is taken to be a sig...
11,306
Why is the use of high order polynomials for regression discouraged?
Runge's phenomenon can lead to high-degree polynomials being much wigglier than the variation actually suggested by the data. An appeal of splines as a substitute for high-degree polynomials, particularly natural splines, is to allow nonmonotonicity and varying slopes without varying too wildly. I would be hard-pressed...
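A quick R comparison on Runge's classic example $1/(1+25x^2)$ (node count and spline df are assumed for illustration): a degree-15 polynomial oscillates near the ends, while a natural spline stays tame.

library(splines)
runge <- function(x) 1 / (1 + 25 * x^2)
xs   <- seq(-1, 1, length.out = 21)
grid <- data.frame(x = seq(-1, 1, length.out = 401))
fp <- lm(y ~ poly(x, 15), data = data.frame(x = xs, y = runge(xs)))
fs <- lm(y ~ ns(x, df = 8), data = data.frame(x = xs, y = runge(xs)))
max(abs(predict(fp, grid) - runge(grid$x)))  # large: Runge-type wiggle
max(abs(predict(fs, grid) - runge(grid$x)))  # much smaller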
11,307
Why is the use of high order polynomials for regression discouraged?
As an inveterate contrarian, I feel the need to amend the premise that high order polynomials shouldn't be used for interpolation. I would argue that the correct statement is "high order polynomials make poor interpolants unless properly regularized". Indeed, it is quite popular (at least in academic circles) to conduc...
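One standard form of regularization, sketched here on my own initiative (the answer is truncated before its own recipe): interpolate at Chebyshev rather than equispaced nodes, and a degree-20 fit to Runge's function behaves.

runge <- function(x) 1 / (1 + 25 * x^2)
n    <- 21
equi <- seq(-1, 1, length.out = n)
cheb <- cos((2 * seq_len(n) - 1) / (2 * n) * pi)   # Chebyshev nodes on [-1, 1]
grid <- data.frame(x = seq(-1, 1, length.out = 401))
fe <- lm(y ~ poly(x, 20), data = data.frame(x = equi, y = runge(equi)))
fc <- lm(y ~ poly(x, 20), data = data.frame(x = cheb, y = runge(cheb)))
max(abs(predict(fe, grid) - runge(grid$x)))  # blows up near the ends
max(abs(predict(fc, grid) - runge(grid$x)))  # stays small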
11,308
Why is P(A,B|C)/P(B|C) = P(A|B,C)?
Any probability result that is true for unconditional probability remains true if everything is conditioned on some event. You know that by definition, $$P(A\mid B) = \frac{P(A\cap B)}{P(B)}\tag{1}$$ and so if we condition everything on $C$ having occurred, we get that $$P(A\mid (B \cap C)) = \frac{P((A\cap B)\mid C)}{...
11,309
Why is P(A,B|C)/P(B|C) = P(A|B,C)?
Just draw the Venn diagram. We then have $$\Pr[A \cap B \mid C] = \frac{\text{"1"}}{\text{"C"}}, \quad \Pr[B \mid C] = \frac{\text{"1"} + \text{"2"}}{\text{"C"}}, \quad \Pr[A \mid B \cap C] = \frac{\text{"1"}}{\text{"1"} + \text{"2"}},$$ and the relationship follows by dividing the first expression by the second.
11,310
Why is P(A,B|C)/P(B|C) = P(A|B,C)?
\begin{align*} \frac{P(A,B|C)}{P(B|C)} &= \frac{P(A,B,C)}{P(C)}\frac{P(C)}{P(B,C)} \\ &= \frac{P(A,B,C)}{P(B,C)} \\ &= P(A|B,C) \end{align*}
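A worked numeric illustration of the three quantities (the sample space and events are arbitrary toys): on two fair dice, both sides of the identity evaluate to the same number, as the algebra above guarantees.

S <- expand.grid(d1 = 1:6, d2 = 1:6)      # 36 equally likely outcomes
A <- S$d1 + S$d2 == 7
B <- S$d1 <= 3
C <- S$d2 >= 2
P <- function(e) mean(e)                   # uniform probability on the outcomes
(P(A & B & C) / P(C)) / (P(B & C) / P(C))  # P(A,B|C) / P(B|C)
P(A & B & C) / P(B & C)                    # P(A|B,C): identical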
11,311
Why is P(A,B|C)/P(B|C) = P(A|B,C)?
My intuition is the following ... Conditioning on $C$ means that we are considering only the cases when $C$ is given. Now, suppose that I live in a world where $C$ is always given. My people know nothing about, and cannot imagine, a world without $C$. For some reason, our mathematicians denote probability of $X$ by $\hat...
11,312
Do statisticians assume one can't over-water a plant, or am I just using the wrong search terms for curvilinear regression?
The remarks in the question about link functions and monotonicity are a red herring. Underlying them seems to be an implicit assumption that a generalized linear model (GLM), by expressing the expectation of a response $Y$ as a monotonic function $f$ of a linear combination $X\beta$ of explanatory variables $X$, is no...
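A small sketch of that point (data simulated; the quadratic dose-response is my stand-in for over-watering): a Poisson GLM with a monotone log link still fits a growth curve that rises and then falls, because $X\beta$ may contain a squared term.

set.seed(1)
water  <- runif(200, 0, 10)
growth <- rpois(200, exp(1 + 0.8 * water - 0.08 * water^2))  # true mean peaks near water = 5
fit <- glm(growth ~ water + I(water^2), family = poisson)
coef(fit)   # fitted mean is non-monotone in water despite the monotone link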
11,313
Do statisticians assume one can't over-water a plant, or am I just using the wrong search terms for curvilinear regression?
Looks guiltily at the dying plant on his desk....apparently not In the comments, @whuber says that "modeling choices ought to be informed by an understanding of what produced the data and motivated by theories in relevant disciplines", to which you asked how one goes about doing this. The Michaelis and Menten kinetics...
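For concreteness, a hedged sketch of fitting a Michaelis-Menten curve with nls (the Vmax and Km values are invented):

set.seed(1)
conc <- runif(50, 0.1, 5)
rate <- 2 * conc / (0.5 + conc) + rnorm(50, sd = 0.05)  # true Vmax = 2, Km = 0.5
fit  <- nls(rate ~ Vmax * conc / (Km + conc), start = list(Vmax = 1, Km = 1))
coef(fit)   # recovers the kinetic parameters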
11,314
Do statisticians assume one can't over-water a plant, or am I just using the wrong search terms for curvilinear regression?
I have a rather informal response from the point of view of someone who spent half of his scientific life at the bench and the other half at the computer, playing with statistics. I tried to put it into a comment, but it was too long. You see, if I were a scientist observing the type of results that you are getting, I w...
11,315
Do statisticians assume one can't over-water a plant, or am I just using the wrong search terms for curvilinear regression?
For data like that, I'd probably be at least considering linear splines. You can do those in lm or glm easily enough. If you take such an approach, your issue will be choosing the number of knots and the knot locations; one solution might be to consider a fair number of possible locations, and use something like the lasso or...
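A minimal lm version of that idea (knot positions 0.5 and 0.7 are assumed known here, which is exactly the luxury the answer says you will not have):

set.seed(1)
water  <- runif(100)
growth <- 2 * pmin(water, 0.5) - 3 * pmax(water - 0.7, 0) + rnorm(100, sd = 0.1)
hinge  <- function(x, k) pmax(x - k, 0)   # linear-spline basis function
fit <- lm(growth ~ water + hinge(water, 0.5) + hinge(water, 0.7))
summary(fit)   # slope changes at the knots capture rise, plateau, decline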
11,316
Do statisticians assume one can't over-water a plant, or am I just using the wrong search terms for curvilinear regression?
I didn't have time to read your whole post, but it seems that your main concern is that the functional forms of responses might shift with treatments. There are techniques for dealing with this, but they are data-intensive. To your specific example (G is growth, W is water, T is treatment):

library(mgcv)
mod = gam(G ~ T +...
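The gam call above is cut off; a self-contained sketch of what such a model plausibly looks like (data and effect shapes simulated for illustration) fits one smooth of water per treatment level via mgcv's by= argument:

library(mgcv)
set.seed(1)
dat <- data.frame(W = runif(200),
                  T = factor(rep(c("ctrl", "trt"), each = 100)))
dat$G <- with(dat, ifelse(T == "trt", sin(6 * W), 2 * W)) + rnorm(200, sd = 0.2)
mod <- gam(G ~ T + s(W, by = T), data = dat)   # separate functional form per treatment
summary(mod)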
11,317
What exactly does a non-parametric test accomplish & What do you do with the results?
"I know non-parametric relies on the median instead of the mean"

Hardly any nonparametric tests actually "rely on" medians in this sense. I can only think of a couple... and the only one I expect you'd be likely to have even heard of would be the sign test.

"to compare...something."

If they relied on medians, presumably...
11,318
What exactly does a non-parametric test accomplish & What do you do with the results?
Suppose you and I are coaching track teams. Our athletes come from the same school, are similar ages, and the same gender (i.e., they're drawn from the same population), but I claim to have discovered a Revolutionary New Training System that will make my team members run much faster than yours. How can I convince you t...
11,319
What exactly does a non-parametric test accomplish & What do you do with the results?
You asked to be corrected if wrong. Here are some comments under that heading to complement @Peter Flom's positive suggestions. "non-parametric relies on the median instead of the mean": often in practice, but that's not a definition. Several non-parametric tests (e.g. chi-square) have nothing to do with medians. re...
11,320
What exactly does a non-parametric test accomplish & What do you do with the results?
You "want" the same things from a p-value here that you want in any other test. The U statistic is the result of a calculation, just like the t statistic, the odds ratio, the F statistic, or what have you. The formula can be found lots of places. It's not very intuitive, but then, neither are other test statistics unti...
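A one-line illustration in R (data simulated): wilcox.test reports the statistic the answer is describing.

set.seed(1)
a <- rnorm(20)
b <- rnorm(20, mean = 1)
wilcox.test(a, b)   # the reported W is the Mann-Whitney U statistic for the first sample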
11,321
What exactly does a non-parametric test accomplish & What do you do with the results?
As a response to a recently closed question, this addresses the above as well. Below is a quote from Bradley's classic Distribution-Free Statistical Tests (1968, p. 15–16) which, while a bit long, is a pretty clear explanation, I believe. The terms nonparametric and distribution-free are not synonymous, and neithert...
11,322
Does this distribution have a name? $f(x)\propto\exp(-|x-\mu|^p/\beta)$
Short Answer

The pdf you describe is most appropriately known as a Subbotin distribution ... see the 1923 paper by Subbotin, which has exactly the same functional form, with say $Y = X-\mu$:

Subbotin, M. T. (1923), On the law of frequency of error, Matematicheskii Sbornik, 31, 296-301.

who enters the pdf at his e...
11,323
Does this distribution have a name? $f(x)\propto\exp(-|x-\mu|^p/\beta)$
For obvious reasons, you can get rid of μ and β so all that remains is $$\int_0^\infty \exp\{−x^p\}\text{d}x\stackrel{y=x^p}{=}\int_0^\infty \exp\{−y\}\left|\dfrac{\text{d}x}{\text{d}y}\right|\text{d}y\stackrel{x=y^{1/p}}{=}\int_0^\infty \exp\{−y\}\frac{1}{p}y^{\frac{1}{p}-1}\text{d}y=\Gamma(1/p)\frac{1}{p} $$ Hence $$...
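A numeric check of that normalizing integral, for an arbitrary p = 1.5:

p <- 1.5
integrate(function(x) exp(-x^p), 0, Inf)$value  # direct quadrature
gamma(1/p) / p                                  # Gamma(1/p)/p from the derivation: matches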
11,324
Does this distribution have a name? $f(x)\propto\exp(-|x-\mu|^p/\beta)$
According to Wikipedia, this is known as Generalized normal distribution (version 1 in the article), and the restriction $p\in [1,2]$ is not required but any positive value is fine. The reference given in Wikipedia is Saralees Nadarajah (2005) A generalized normal distribution, Journal of Applied Statistics, 32:7, 6...
11,325
Distribution of ratio between two independent uniform random variables
The right logic is that with independent $X, Y \sim U(0,1)$, $Z=\frac YX$ and $Z^{-1} =\frac XY$ have the same distribution and so for $0 < z < 1$ \begin{align} P\left\{\frac YX \leq z\right\} &= P\left\{\frac XY \leq z\right\}\\ &= P\left\{\frac YX \geq \frac 1z \right\}\\ F_{Z}(z) &= 1 - F_{Z...
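A quick simulation check of that CDF relation (sample size arbitrary):

set.seed(1)
z <- runif(1e6) / runif(1e6)   # Z = Y/X for independent U(0,1)
mean(z <= 0.5)                 # F_Z(0.5), about 0.25
1 - mean(z <= 1 / 0.5)         # 1 - F_Z(1/0.5): matches F_Z(0.5)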
11,326
Distribution of ratio between two independent uniform random variables
This distribution is symmetric--if you look at it the right way. The symmetry you have (correctly) observed is that $Y/X$ and $X/Y = 1/(Y/X)$ must be identically distributed. When working with ratios and powers, you are really working within the multiplicative group of the positive real numbers. The analog of the loc...
11,327
Distribution of ratio between two independent uniform random variables
Method 1: Let $X_1\sim U(0,1)$ and $X_2\sim U(0,1)$. Since $X_1$ and $X_2$ are independent, $$f_{X_1,X_2}(x_1,x_2)=f_{X_1}(x_1)\cdot f_{X_2}(x_2)=1.$$ Define $Y_1=\frac{X_1}{X_2}$ and $Y_2=X_2$. It means that $Y_1=u_1(X_1,X_2)$ and $Y_2=u_2(X_1,X_2)$ where $u_1(x_1,x_2)=\frac{x_1}{x_2}$ and $u_2(x_1,x_2)=x_2$. Now, let's fi...
11,328
Distribution of ratio between two independent uniform random variables
If you think geometrically... In the $X$-$Y$ plane, curves of constant $Z = Y/X$ are lines through the origin. ($Y/X$ is the slope.) One can read off the value of $Z$ from a line through the origin by finding its intersection with the line $X=1$. (If you've ever studied projective space: here $X$ is the homogenizing...
11,329
Distribution of ratio between two independent uniform random variables
Just for the record, my intuition was totally wrong. We are talking about density, not probability. The right logic is to check that $$\int_1^k f_Z(z)\,dz = \int_{1/k}^1 f_Z(z)\,dz = \frac{1}{2}\left(1 - \frac{1}{k}\right),$$ and this is indeed the case.
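The same check by simulation, for an arbitrary k = 4:

set.seed(1)
z <- runif(1e6) / runif(1e6)
k <- 4
mean(z >= 1 & z <= k)       # P(1 <= Z <= k)
mean(z >= 1/k & z <= 1)     # P(1/k <= Z <= 1)
(1 - 1/k) / 2               # both match (1/2)(1 - 1/k)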
11,330
Distribution of ratio between two independent uniform random variables
Yes, the link Distribution of a ratio of uniforms: What is wrong? provides the CDF of $Z=Y/X$; the PDF here is just the derivative of the CDF, so the formula is correct. I think your problem lies in the assumption that $Z$ is "symmetric" around 1. However, this is not true. Intuitively, $Z$ should be a skewed distribution,...
11,331
Difference between Random Forests and Decision tree
You are right that the two concepts are similar. As is implied by the names "Tree" and "Forest," a Random Forest is essentially a collection of Decision Trees. A decision tree is built on an entire dataset, using all the features/variables of interest, whereas a random forest randomly selects observations/rows and sp...
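A minimal side-by-side in R (assuming the rpart and randomForest packages are installed; the iris split is arbitrary):

library(rpart)
library(randomForest)
set.seed(1)
idx    <- sample(nrow(iris), 100)
tree   <- rpart(Species ~ ., data = iris[idx, ])          # single decision tree
forest <- randomForest(Species ~ ., data = iris[idx, ])    # ensemble of trees
mean(predict(tree, iris[-idx, ], type = "class") == iris$Species[-idx])
mean(predict(forest, iris[-idx, ]) == iris$Species[-idx])  # usually at least as good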
11,332
Difference between Random Forests and Decision tree
When using a decision tree model on a given training dataset, the accuracy keeps improving with more and more splits. You can easily overfit the data and not know when you have crossed the line unless you are using cross validation (on the training data set). The advantage of a simple decision tree model is that it is easy to inte...
11,333
Difference between Random Forests and Decision tree
The random forest algorithm is a type of ensemble learning algorithm. This means that it uses multiple decision trees to make predictions. The advantage of using an ensemble algorithm is that it can reduce the variance in the predictions, making them more accurate. The random forest algorithm achieves this by averaging...
11,334
Evaluate definite interval of normal distribution
It depends on exactly what you are looking for. Below are some brief details and references. Much of the literature for approximations centers around the function $$ Q(x) = \int_x^\infty \frac{1}{\sqrt{2\pi}} e^{-\frac{u^2}{2}} \, \mathrm{d}u $$ for $x > 0$. This is because the function you provided can be decomposed a...
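The decomposition presumably being referred to, checked numerically (endpoints arbitrary): the integral of the standard normal density over $[a,b]$ equals $Q(a)-Q(b)$.

Q <- function(x) 1 - pnorm(x)   # upper-tail function
a <- 0.3; b <- 1.7
integrate(dnorm, a, b)$value
Q(a) - Q(b)                     # same value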
11,335
Evaluate definite interval of normal distribution
I suppose I'm too late to be the hero, but I wanted to comment on cardinal's post, and this comment became too big for its intended box. For this answer, I'm assuming $x >0$; appropriate reflection formulae can be used for negative $x$. I'm more used to dealing with the error function $\mathrm{erf}(x)$ myself, but I'll try t...
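For reference, the standard identity linking the error-function viewpoint to R's pnorm (base R has no built-in erf):

erf <- function(x) 2 * pnorm(x * sqrt(2)) - 1
(1 + erf(1 / sqrt(2))) / 2   # equals Phi(1)
pnorm(1)                     # same value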
11,336
Evaluate definite interval of normal distribution
(This reply originally appeared in response to a similar question, subsequently closed as a duplicate. The O.P. only wanted "an" implementation of the Gaussian integral, not necessarily "state of the art." In his comments it became apparent that a relatively simple, short implementation would be preferred.) As comme...
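As one example of a "relatively simple, short implementation" (this sketch is mine, not the answer's code): the Taylor series of the Gaussian integral, adequate for moderate |x|.

Phi <- function(x, terms = 60) {
  n <- 0:terms
  # Phi(x) = 1/2 + (1/sqrt(2*pi)) * sum_n (-1)^n x^(2n+1) / (n! (2n+1) 2^n)
  0.5 + sum((-1)^n * x^(2 * n + 1) /
            (factorial(n) * (2 * n + 1) * 2^n)) / sqrt(2 * pi)
}
c(Phi(1), pnorm(1))     # agree
c(Phi(-2), pnorm(-2))   # agree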
11,337
Distribution of the maximum of two correlated normal variables
According to Nadarajah and Kotz, 2008, Exact Distribution of the Max/Min of Two Gaussian Random Variables, the PDF of $X = \max(X_1, X_2)$ appears to be $$f(x) = 2 \cdot \phi(x) \cdot \Phi\left( \frac{1 - r}{\sqrt{1 - r^2}} x\right),$$ where $\phi$ is the PDF and $\Phi$ is the CDF of the standard normal distribution. $...
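A simulation check of that density (standard marginals and r = 0.5 assumed):

set.seed(1)
r  <- 0.5
x1 <- rnorm(1e6)
x2 <- r * x1 + sqrt(1 - r^2) * rnorm(1e6)   # correlated standard normals
m  <- pmax(x1, x2)
f  <- function(x) 2 * dnorm(x) * pnorm((1 - r) / sqrt(1 - r^2) * x)
h  <- 0.05
sapply(c(-1, 0, 1), function(v) mean(abs(m - v) < h) / (2 * h))  # empirical density
f(c(-1, 0, 1))                                                   # formula: agrees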
11,338
Distribution of the maximum of two correlated normal variables
Let $f_\rho$ be the bivariate Normal PDF for $(X,Y)$ with standard marginals and correlation $\rho$. The CDF of the maximum is, by definition, $$\Pr(\max(X, Y)\le z) = \Pr(X\le z,\ Y\le z) = \int_{-\infty}^z\int_{-\infty}^z f_\rho(x,y)dy dx.$$ The bivariate Normal PDF is symmetric (via reflection) around the diagonal....
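That double integral is the bivariate normal CDF evaluated at $(z, z)$; a quick check (assuming the mvtnorm package, with rho = 0.5 and z = 0.7 chosen arbitrarily):

library(mvtnorm)
rho <- 0.5; z <- 0.7
S  <- matrix(c(1, rho, rho, 1), 2)
pmvnorm(upper = c(z, z), sigma = S)   # Pr(X <= z, Y <= z)
set.seed(1)
xy <- rmvnorm(1e5, sigma = S)
mean(pmax(xy[, 1], xy[, 2]) <= z)     # simulation agrees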
11,339
"Fully Bayesian" vs "Bayesian"
The terminology "fully Bayesian approach" is nothing but a way to indicate that one moves from a "partially" Bayesian approach to a "true" Bayesian approach, depending on the context. Or to distinguish a "pseudo-Bayesian" approach from a "strictly" Bayesian approach. For example one author writes: "Unlike the majority ...
"Fully Bayesian" vs "Bayesian"
The terminology "fully Bayesian approach" is nothing but a way to indicate that one moves from a "partially" Bayesian approach to a "true" Bayesian approach, depending on the context. Or to distinguis
"Fully Bayesian" vs "Bayesian" The terminology "fully Bayesian approach" is nothing but a way to indicate that one moves from a "partially" Bayesian approach to a "true" Bayesian approach, depending on the context. Or to distinguish a "pseudo-Bayesian" approach from a "strictly" Bayesian approach. For example one autho...
"Fully Bayesian" vs "Bayesian" The terminology "fully Bayesian approach" is nothing but a way to indicate that one moves from a "partially" Bayesian approach to a "true" Bayesian approach, depending on the context. Or to distinguis
11,340
"Fully Bayesian" vs "Bayesian"
I think the terminology is used to distinguish between the Bayesian approach and the empirical Bayes approach. Full Bayes uses a specified prior whereas empirical Bayes allows the prior to be estimated through use of data.
"Fully Bayesian" vs "Bayesian"
I think the terminology is used to distinguish between the Bayesian approach and the empirical Bayes approach. Full Bayes uses a specified prior whereas empirical Bayes allows the prior to be estimat
"Fully Bayesian" vs "Bayesian" I think the terminology is used to distinguish between the Bayesian approach and the empirical Bayes approach. Full Bayes uses a specified prior whereas empirical Bayes allows the prior to be estimated through use of data.
"Fully Bayesian" vs "Bayesian" I think the terminology is used to distinguish between the Bayesian approach and the empirical Bayes approach. Full Bayes uses a specified prior whereas empirical Bayes allows the prior to be estimat
11,341
"Fully Bayesian" vs "Bayesian"
I would use "fully Bayesian" to mean that any nuisance parameters had been marginalised from the analysis, rather than optimised (e.g. MAP estimates). For example a Gaussian process model, with hyper-parameters tuned to maximise the marginal likelihood would be Bayesian, but only partially so, whereas if the hyper-pa...
"Fully Bayesian" vs "Bayesian"
I would use "fully Bayesian" to mean that any nuissance parameters had been marginalised from the analysis, rather than optimised (e.g. MAP estimates). For example a Gaussian process model, with hype
"Fully Bayesian" vs "Bayesian" I would use "fully Bayesian" to mean that any nuissance parameters had been marginalised from the analysis, rather than optimised (e.g. MAP estimates). For example a Gaussian process model, with hyper-parameters tuned to maximise the marginal likelihood would be Bayesian, but only partia...
"Fully Bayesian" vs "Bayesian" I would use "fully Bayesian" to mean that any nuissance parameters had been marginalised from the analysis, rather than optimised (e.g. MAP estimates). For example a Gaussian process model, with hype
11,342
"Fully Bayesian" vs "Bayesian"
"Bayesian" really means "approximate Bayesian". "Fully Bayesian" also means "approximate Bayesian", but with less approximation. Edit: Clarification. The fully Bayesian approach would be, for a given model and data, to calculate the posterior probability using the Bayes rule $$ p(\theta \mid \text{Data}) \propto p(\te...
"Fully Bayesian" vs "Bayesian"
"Bayesian" really means "approximate Bayesian". "Fully Bayesian" also means "approximate Bayesian", but with less approximation. Edit: Clarification. The fully Bayesian approach would be, for a given
"Fully Bayesian" vs "Bayesian" "Bayesian" really means "approximate Bayesian". "Fully Bayesian" also means "approximate Bayesian", but with less approximation. Edit: Clarification. The fully Bayesian approach would be, for a given model and data, to calculate the posterior probability using the Bayes rule $$ p(\theta ...
"Fully Bayesian" vs "Bayesian" "Bayesian" really means "approximate Bayesian". "Fully Bayesian" also means "approximate Bayesian", but with less approximation. Edit: Clarification. The fully Bayesian approach would be, for a given
11,343
"Fully Bayesian" vs "Bayesian"
As a practical example: I do some Bayesian modeling using splines. A common problem with splines is knot selection. One popular possibility is to use a Reversible Jump Markov Chain Monte Carlo (RJMCMC) scheme where one proposes to add, delete, or move a knot during each iteration. The coefficients for the splines ar...
"Fully Bayesian" vs "Bayesian"
As a practical example: I do some Bayesian modeling using splines. A common problem with splines is knot selection. One popular possibility is to use a Reversible Jump Markov Chain Monte Carlo (RJMC
"Fully Bayesian" vs "Bayesian" As a practical example: I do some Bayesian modeling using splines. A common problem with splines is knot selection. One popular possibility is to use a Reversible Jump Markov Chain Monte Carlo (RJMCMC) scheme where one proposes to add, delete, or move a knot during each iteration. The ...
"Fully Bayesian" vs "Bayesian" As a practical example: I do some Bayesian modeling using splines. A common problem with splines is knot selection. One popular possibility is to use a Reversible Jump Markov Chain Monte Carlo (RJMC
11,344
"Fully Bayesian" vs "Bayesian"
I would add a characterization that has not been mentioned so far. A fully Bayesian approach "fully" propagates the uncertainty in all the unknown quantities through the Bayes theorem. On the other hand, Pseudo-Bayes approaches such as empirical Bayes do not propagate all the uncertainties. For example, when estimat...
"Fully Bayesian" vs "Bayesian"
I would add a characterization that has not been mentioned so far. A fully Bayesian approach "fully" propagates the uncertainty in all the unknown quantities through the Bayes theorem. On the other
"Fully Bayesian" vs "Bayesian" I would add a characterization that has not been mentioned so far. A fully Bayesian approach "fully" propagates the uncertainty in all the unknown quantities through the Bayes theorem. On the other hand, Pseudo-Bayes approaches such as empirical Bayes do not propagate all the uncertaint...
"Fully Bayesian" vs "Bayesian" I would add a characterization that has not been mentioned so far. A fully Bayesian approach "fully" propagates the uncertainty in all the unknown quantities through the Bayes theorem. On the other
11,345
What's the purpose of autocorrelation?
Autocorrelation has several plain-language interpretations that signify in ways that non-autocorrelated processes and models do not: An autocorrelated variable has memory of its previous values. Such variables have behavior that depends on what went before. Memory may be long or short relative to the period of observa...
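A two-line illustration of "memory" (the AR(1) coefficient 0.8 is chosen arbitrarily): the sample autocorrelations decay roughly like 0.8^k.

set.seed(1)
x <- arima.sim(model = list(ar = 0.8), n = 500)
acf(x, lag.max = 5, plot = FALSE)   # geometric decay: the series remembers its past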
11,346
What's the purpose of autocorrelation?
An attempt at an answer. Autocorrelation is no different than any other relationship between predictors. It's just that the predictor and the dependent variable happen to be the same time series, just lagged.

"isn't every state in the universe dependent on the previous one?"

Yes indeed. Just as every object's state in ...
11,347
What's the purpose of autocorrelation?
First, I think you mean what is the purpose of evaluating autocorrelation and dealing with it. If you really mean the "purpose of autocorrelation" then that's philosophy, not statistics. Second, states of the universe are correlated with previous states but not every statistical problem deals with previous states of na...
11,348
Is a high $R^2$ ever useless?
Yes. The criteria for evaluating a statistical model depend on the specific problem at hand and aren't some mechanical function of $R^2$ or statistical significance (though they matter). The relevant question is, "does the model help you understand the data?" Meaningless regressions with high $R^2$ The simplest way to...
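A classic illustration of a meaningless regression with a high $R^2$ (a sketch of the usual spurious-regression demo, not code from the original answer): regress one random walk on another, independent one.
set.seed(3)
n <- 100
x <- cumsum(rnorm(n))  # random walk
y <- cumsum(rnorm(n))  # independent random walk
summary(lm(y ~ x))$r.squared  # frequently large despite there being no relationship at all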
11,349
Is a high $R^2$ ever useless?
"Higher is better" is a bad rule of thumb for R-square. Don Morrison wrote some famous articles a few years back demonstrating that R-squares approaching zero could still both actionable and profitable, depending on the industry. For instance, in direct marketing predicting response to a magazine subscription mailing ...
11,350
Is a high $R^2$ ever useless?
The other answers offer great theoretical explanations of the many ways R-squared values can be fixed/faked/misleading/etc. Here is a hands-on demonstration that has always stuck with me, coded in R:
y <- rnorm(10)                  # 10 observations of pure noise
x <- sapply(rep(10, 8), rnorm)  # 8 random predictors, unrelated to y
summary(lm(y ~ x))
This can provide R-squared values > 0.90. Add enough regr...
11,351
Why are we using a biased and misleading standard deviation formula for $\sigma$ of a normal distribution?
For the more restricted question Why is a biased standard deviation formula typically used? the simple answer Because the associated variance estimator is unbiased. There is no real mathematical/statistical justification. may be accurate in many cases. However, this is not necessarily always the case. There are at...
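A quick simulation makes the size of the bias concrete (my illustration, using n = 10 draws from a standard normal, for which theory gives E(S) of about 0.97):
set.seed(4)
s <- replicate(1e5, sd(rnorm(10)))
mean(s)    # about 0.97: the SD estimator runs low on average
mean(s^2)  # about 1: the associated variance estimator is unbiased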
11,352
Why are we using a biased and misleading standard deviation formula for $\sigma$ of a normal distribution?
The sample standard deviation $S=\sqrt{\frac{\sum (X - \bar{X})^2}{n-1}}$ is complete and sufficient for $\sigma$ so the set of unbiased estimators of $\sigma^k$ given by $$ \frac{(n-1)^\frac{k}{2}}{2^\frac{k}{2}} \cdot \frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{n+k-1}{2}\right)} \cdot S^k = \frac{S^k}{c...
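For the $k = 1$ case this is the familiar $c_4$ correction; a short R sketch checking it by simulation (my addition, consistent with the formula above):
c4 <- function(n) sqrt(2 / (n - 1)) * gamma(n / 2) / gamma((n - 1) / 2)
set.seed(5)
n <- 10
s <- replicate(1e5, sd(rnorm(n)))
mean(s)          # biased: close to c4(10), about 0.9727
mean(s / c4(n))  # close to 1: unbiased for sigma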
11,353
Why are we using a biased and misleading standard deviation formula for $\sigma$ of a normal distribution?
Q2: Would someone please explain to me why we are using SD anyway as it is clearly biased and misleading? This came up as an aside in comments, but I think it bears repeating because it's the crux of the answer: The sample variance formula is unbiased, and variances are additive. So if you expect to do any (affine) t...
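A two-line check of the additivity point (illustrative numbers of my choosing):
set.seed(6)
x <- rnorm(1e6, sd = 3)
y <- rnorm(1e6, sd = 4)
var(x + y)  # about 25 = 9 + 16: variances of independent variables add
sd(x + y)   # about 5, not 3 + 4 = 7: standard deviations do not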
11,354
Why are we using a biased and misleading standard deviation formula for $\sigma$ of a normal distribution?
This post is in outline form. (1) Taking a square root is not an affine transformation (credit @Scortchi). (2) ${\rm var}(s) = {\rm E}(s^2) - {\rm E}(s)^2$, thus ${\rm E}(s) = \sqrt{{\rm E}(s^2) - {\rm var}(s)} \neq \sqrt{{\rm E}(s^2)}$ unless ${\rm var}(s) = 0$. (3) $s^2 = \frac{\sum_{i=1}^{n}(x_i-\bar{x})^2}{n-1}$ is unbiased for $\sigma^2$, whereas $\text{E}(s...
11,355
Why are we using a biased and misleading standard deviation formula for $\sigma$ of a normal distribution?
I want to add the Bayesian answer to this discussion. Just because your assumption is that the data is generated according to some normal with unknown mean and variance, that doesn't mean that you should summarize your data using a mean and a variance. This whole problem can be avoided if you draw the model, which wi...
11,356
Good resource to understand ANOVA and ANCOVA?
The classics, I think, are Winer and Kirk; both cover essentially only ANOVA and ANCOVA. You can probably get used copies for cheap (e.g., I own a Winer second edition from '71, bought via Amazon for less than $10): Winer - Statistical Principles In Experimental Design Kirk - Experimental Design A more contemporary book is ...
11,357
Good resource to understand ANOVA and ANCOVA?
So, in addition to this paper, Misunderstanding Analysis of Covariance, which enumerates common pitfalls when using ANCOVA, I would recommend starting with: Frank Harrell's homepage, especially his handout on Regression Modeling Strategies and Biostatistical Modeling John Fox's homepage includes great material on Line...
11,358
Good resource to understand ANOVA and ANCOVA?
Applied Linear Statistical Models by Neter, Kutner, Wasserman, and Nachtscheim, has a very exhaustive (and exhausting!) treatment of ANOVA and ANCOVA. It also covers power analysis, linear regression, multilinear regression, and introduces some MANOVA. It's a very long text, but does a very thorough job. I've linked yo...
11,359
Good resource to understand ANOVA and ANCOVA?
Gelman has a good discussion paper on ANOVA: Analysis of variance—why it is more important than ever
11,360
Good resource to understand ANOVA and ANCOVA?
In my line of work, I've found this one to be quite useful: Statistical Methods for Psychology (Howell, 2009)
11,361
Good resource to understand ANOVA and ANCOVA?
The R Book does a good job on that: it dedicates one chapter to each of those methods (chapters 11 and 12). If you are new to R, this is a great book to start with.
11,362
Why are second-order derivatives useful in convex optimization?
Here's a common framework for interpreting both gradient descent and Newton's method, which is maybe a useful way to think of the difference as a supplement to @Sycorax's answer. (BFGS approximates Newton's method; I won't talk about it in particular here.) We're minimizing the function $f$, but we don't know how to do...
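As a toy illustration of the two updates (my sketch, on a one-dimensional quadratic; not code from the answer):
f    <- function(x) 0.5 * x^2 + 2 * x  # minimum at x = -2
grad <- function(x) x + 2
hess <- function(x) 1
x_gd <- x_nt <- 10
lr <- 0.2                                       # gradient descent needs a step size
for (i in 1:20) x_gd <- x_gd - lr * grad(x_gd)  # creeps toward -2
x_nt <- x_nt - grad(x_nt) / hess(x_nt)          # Newton: exact in one step on a quadratic
c(gd = x_gd, newton = x_nt)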
11,363
Why are second-order derivatives useful in convex optimization?
Essentially, the advantage of a second-derivative method like Newton's method is that it has the quality of quadratic termination. This means that it can minimize a quadratic function in a finite number of steps. A method like gradient descent depends heavily on the learning rate, which can cause optimization to either...
11,364
Why are second-order derivatives useful in convex optimization?
In convex optimization you are approximating the function by a second-degree polynomial; in the one-dimensional case: $$f(x)=c+\beta x + \alpha x^2$$ In this case the second derivative is $$\partial^2 f(x)/\partial x^2=2\alpha$$ If you know the derivatives, then it's easy to get the next guess for the optimum: $$\text{g...
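The truncated formula is presumably the usual Newton update $x_{\text{new}} = x - f'(x)/f''(x)$; under that assumption, a quick numeric check (coefficients chosen arbitrarily) that a single step lands on the optimum $-\beta/(2\alpha)$ of the quadratic above:
alpha <- 2; beta <- -8; cc <- 3         # f(x) = cc + beta * x + alpha * x^2
fp  <- function(x) beta + 2 * alpha * x  # first derivative
fpp <- 2 * alpha                         # second derivative
x0 <- 5
x1 <- x0 - fp(x0) / fpp                  # Newton step
c(x1, -beta / (2 * alpha))               # both equal 2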
11,365
Why are second-order derivatives useful in convex optimization?
@Danica already gave a great technical answer. The no-maths explanation is that while the linear (order 1) approximation provides a “plane” that is tangential to a point on an error surface, the quadratic approximation (order 2) provides a surface that hugs the curvature of the error surface. The videos on this link d...
11,366
ACF and PACF Formula
Autocorrelations The correlation between two variables $y_1, y_2$ is defined as: $$ \rho = \frac{\hbox{E}\left[(y_1-\mu_1)(y_2-\mu_2)\right]}{\sigma_1 \sigma_2} = \frac{\hbox{Cov}(y_1, y_2)}{\sigma_1 \sigma_2} \,, $$ where E is the expectation operator, $\mu_1$ and $\mu_2$ are the means respectively for $y_1$ and $y_2$...
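These definitions translate directly into code; a small R check of my own that the hand-computed lag-$k$ sample autocorrelation matches acf():
set.seed(7)
y <- arima.sim(model = list(ar = 0.6), n = 200)
k <- 3
n <- length(y); ybar <- mean(y)
r_k <- sum((y[(k + 1):n] - ybar) * (y[1:(n - k)] - ybar)) / sum((y - ybar)^2)
c(by_hand = r_k, from_acf = acf(y, plot = FALSE)$acf[k + 1])  # the two agree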
11,367
ACF and PACF Formula
"I want to create a code for plotting ACF and PACF from time-series data". Although the OP is a bit vague, it may possibly be more targeted to a "recipe"-style coding formulation than a linear algebra model formulation. The ACF is rather straightforward: we have a time series, and basically make multiple "copies" ...
11,368
ACF and PACF Formula
Well, in practice we find error (noise), represented by $e_t$; the confidence bands help you figure out whether a level can be considered as only noise (because about 95% of the time it will fall within the bands).
11,369
ACF and PACF Formula
Here is Python code to compute the ACF:
import numpy as np

def shift(x, b):
    # Shift the series right by b positions, padding the front with zeros
    if b <= 0:
        return np.array(x)
    d = np.array(x)
    d1 = d.copy()    # work on a copy so the slice assignments cannot alias d
    d1[b:] = d[:-b]
    d1[0:b] = 0
    return d1

# One way of doing it using bare-bones numpy
# - you divide by the first (lag-0) value to normalize, because corr(x, x) = 1
x = np.arange(0, 10)
xo = x - x.mean()
cors = [...
11,370
What makes the mean of some distributions undefined?
The mean of a distribution is defined in terms of an integral (I'll write it as if for a continuous distribution - as a Riemann integral, say - but the issue applies more generally; we can proceed to Stieltjes or Lebesgue integration to deal with these properly and all at once): $$E(X) = \int_{-\infty}^\infty x f(x)\, dx$$ ...
11,371
What makes the mean of some distributions undefined?
The other answers are good, but might not convince everyone, especially people who take one look at the Cauchy distribution (with $x_0 = 0$) and say it's still intuitively obvious that the mean should be zero. The reason the intuitive answer is not correct from the mathematical perspective is due to the Riemann rearr...
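To see rearrangement in action numerically, take the alternating harmonic series (a sketch I am adding; the classic "two positive terms, then one negative" reordering):
j <- 1:1e5
natural    <- sum((-1)^(0:(2e5 - 1)) / (1:2e5))         # about log(2) = 0.693
rearranged <- sum(1/(4*j - 3) + 1/(4*j - 1) - 1/(2*j))  # same terms, about 1.5 * log(2)
c(natural, rearranged)
Same terms, different order, different sum: exactly the freedom that breaks the intuitive "mean" of the Cauchy.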
11,372
What makes the mean of some distributions undefined?
General Abrial and Glen_b had perfect answers. I just want to add a small demo to show you that the mean of the Cauchy distribution does not exist / does not converge. In the following experiment, you will see that even if you draw a large sample and calculate the empirical mean from the sample, the numbers are quite different from experi...
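Since the original code is cut off, here is a minimal reconstruction of that kind of demo (my sketch): the running mean of Cauchy draws never settles down.
set.seed(9)
x <- rcauchy(1e5)
running_mean <- cumsum(x) / seq_along(x)
plot(running_mean, type = "l")             # keeps jumping no matter how large the sample gets
c(mean(rcauchy(1e5)), mean(rcauchy(1e5)))  # two "empirical means" that disagree wildly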
11,373
What makes the mean of some distributions undefined?
The Cauchy distribution is a disguised form of a very fundamental distribution, namely the uniform distribution on a circle. In formulas, the infinitesimal probability is $d\theta/2\pi$, where $\theta$ is the angle coordinate. The probability (or measure) of an arc $A\subset \mathbb S^1$ is $\mathtt{length}(A)/2\pi$. T...
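A short check of this construction (my illustration): projecting a uniform angle through tan() reproduces Cauchy draws.
set.seed(10)
theta <- runif(1e5, -pi / 2, pi / 2)   # uniform angle
x <- tan(theta)                        # projected onto the line
qqplot(x, rcauchy(1e5)); abline(0, 1)  # quantiles line up, apart from the extreme tails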
11,374
What makes the mean of some distributions undefined?
By definition of Lebesgue-Stieltjes integral, the mean exists if: $$\int \vert x\vert dF(x)<\infty.$$ https://en.wikipedia.org/wiki/Moment_(mathematics)#Significance_of_the_moments https://en.wikipedia.org/wiki/Lebesgue_integration
11,375
What makes the mean of some distributions undefined?
It helps to think about such questions from a more abstract point of view. When we are talking about random variables we implicitly assume the existence of probability space $(\Omega, \mathcal{F}, \mathbb{P})$. Then a random variable $X: \Omega \to \mathbb{R}$ is simply a measurable function in $(\mathbb{R}, \mathcal{B...
11,376
Can a model for non-negative data with clumping at zeros (Tweedie GLM, zero-inflated GLM, etc.) predict exact zeros?
Note that the predicted value in a GLM is a mean. For any distribution on non-negative values, to predict a mean of 0, its distribution would have to be entirely a spike at 0. However, with a log-link, you're never going to fit a mean of exactly zero (since that would require $\eta$ to go to $-\infty$). So your problem...
11,377
Can a model for non-negative data with clumping at zeros (Tweedie GLM, zero-inflated GLM, etc.) predict exact zeros?
Predicting the proportion of zeros I am the author of the statmod package and joint author of the tweedie package. Everything in your example is working correctly. The code is accounting correctly for any zeros that might be in the data. As Glen_b and Tim have explained, the predicted mean value will never be exactly z...
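For $1 < p < 2$ the probability of an exact zero has the closed form $P(Y=0) = \exp\{-\mu^{2-p}/[\phi(2-p)]\}$; a base-R sketch with made-up parameter values (averaging this over the fitted means estimates the proportion of zeros in the data):
p_zero <- function(mu, phi, p) exp(-mu^(2 - p) / (phi * (2 - p)))
p_zero(mu = 1, phi = 1, p = 1.5)  # exp(-2), about 0.135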
11,378
Can a model for non-negative data with clumping at zeros (Tweedie GLM, zero-inflated GLM, etc.) predict exact zeros?
This answer was merged from another thread asking about predictions from a zero-inflated regression model, but it also applies to the Tweedie GLM model. Regression-like models predict the mean of some distribution (normal for linear regression, Bernoulli for logistic regression, Poisson for Poisson regression etc.). In the case o...
11,379
Cross validation and parameter tuning
Cross-validation gives a measure of out-of-sample accuracy by averaging over several random partitions of the data into training and test samples. It is often used for parameter tuning by doing cross-validation for several (or many) possible values of a parameter and choosing the parameter value that gives the lowest c...
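A self-contained base-R sketch of this tuning loop (illustrative: choosing a polynomial degree by 5-fold cross-validation):
set.seed(11)
n <- 100
x <- runif(n); y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)
folds <- sample(rep(1:5, length.out = n))
cv_mse <- sapply(1:8, function(deg) {   # candidate tuning-parameter values
  mean(sapply(1:5, function(k) {        # average test error over the 5 folds
    fit <- lm(y ~ poly(x, deg), subset = folds != k)
    mean((y[folds == k] - predict(fit, data.frame(x = x[folds == k])))^2)
  }))
})
which.min(cv_mse)                       # the degree with the lowest CV error wins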
11,380
Cross validation and parameter tuning
To add to Jonathan's answer: if you use cross-validation for parameter tuning, the out-samples in fact become part of your model. So you need another independent sample to correctly measure the final model's performance. Employed for measuring model performance, cross validation can measure more than just th...
11,381
Cross validation and parameter tuning
To add to previous answers, we'll start from the beginning: There are a few ways you can overfit your models to the training data, some obvious, some less so. First, and the most important one, is overfitting of the training parameters (weights) to the data (curve-fitting parameters in logistic regression, network wei...
11,382
Cross validation and parameter tuning
If you are from a scikit-learn background, this answer might be helpful. k-fold cross-validation is used to split the data into k partitions; the estimator is then trained on k-1 partitions and tested on the kth partition. Since any of the k partitions can play the role of test partition, there are k possibilities. Th...
11,383
Cross validation and parameter tuning
Hyperparameter optimization, or parameter tuning, is used to find the best hyperparameters (sklearn hyperparameters optimization), that is, parameters that are not directly learnt within estimators. They are passed as arguments to the constructor of the estimator classes. Typical examples include C, kernel and gamma for S...
11,384
Appropriate normality tests for small samples
The fBasics package in R (part of Rmetrics) includes several normality tests, covering many of the popular frequentist tests -- Kolmogorov-Smirnov, Shapiro-Wilk, Jarque–Bera, and D'Agostino -- along with a wrapper for the normality tests in the nortest package -- Anderson–Darling, Cramer–von Mises, Lilliefors (Kolmogor...
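A quick sketch of running two of the tests mentioned (using base R's shapiro.test and nortest's ad.test, which the fBasics functions wrap; nortest must be installed):
library(nortest)
set.seed(12)
x <- rt(20, df = 3)  # a small, heavy-tailed sample
shapiro.test(x)      # Shapiro-Wilk
ad.test(x)           # Anderson-Darling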
11,385
Appropriate normality tests for small samples
For normality, actual Shapiro-Wilk has good power in fairly small samples. The main competitor in studies that I have seen is the more general Anderson-Darling, which does fairly well, but I wouldn't say it was better. If you can clarify what alternatives interest you, possibly a better statistic would be more obvious...
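Power against a specific alternative is easy to check by simulation; a rough sketch of my own, with an exponential alternative at n = 20:
set.seed(13)
reject <- replicate(5000, shapiro.test(rexp(20))$p.value < 0.05)
mean(reject)  # empirical power of Shapiro-Wilk against Exp(1); swap in other alternatives of interest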
11,386
Appropriate normality tests for small samples
There is a whole Wikipedia category on normality tests including: the Anderson-Darling test, popular amongst statisticians; and the Jarque-Bera test, popular amongst econometricians. I think A-D is probably the best of them.
11,387
Appropriate normality tests for small samples
For completeness, econometricians also like the Kiefer and Salmon test from their 1983 paper in Economics Letters -- it sums 'normalized' expressions of skewness and kurtosis which is then chi-square distributed. I have an old C++ version I wrote during grad school that I could translate into R. Edit: And here is a recent pa...
11,388
Appropriate normality tests for small samples
In fact the Kiefer-Salmon test and the Jarque-Bera test are critically different, as shown in several places but most recently here: Moment Tests for Standardized Error Distributions: A Simple Robust Approach by Yi-Ting Chen. The Kiefer-Salmon test by construction is robust in the face of ARCH-type error structures, unl...
11,389
Appropriate normality tests for small samples
For sample sizes < 30 subjects, Shapiro-Wilk is considered to have robust power. Be careful when adjusting the significance level of the test, since it may induce a type II error! [1]
11,390
Appropriate normality tests for small samples
I read about the z-score intervals: For small samples...
11,391
Why use extreme value theory?
Disclaimer: At points in the following, this GROSSLY presumes that your data is normally distributed. If you are actually engineering anything then talk to a strong stats professional and let that person sign on the line saying what the level will be. Talk to five of them, or 25 of them. This answer is meant for a c...
11,392
Why use extreme value theory?
If you are only interested in a tail, it makes sense to focus your data collection and analysis effort on the tail. It should be more efficient to do so. I emphasized the data collection because this aspect is often ignored when presenting an argument for EVT distributions. In fact, it could be infeasible to col...
11,393
Why use extreme value theory?
You use extreme value theory to extrapolate from the observed data. Often, the data you have simply isn't big enough to provide you with a sensible estimate of a tail probability. Taking @EngrStudent's example of a 1-in-1000 year event: that corresponds to finding the 99.9% quantile of a distribution. But if you only h...
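A quick simulation of how unreliable the purely empirical route is (my sketch, with a lognormal standing in for the true distribution):
set.seed(14)
q_hat <- replicate(1000, quantile(rlnorm(200), 0.999))  # 200 observations each time
quantile(q_hat, c(0.05, 0.95))  # the estimate swings wildly from sample to sample
qlnorm(0.999)                   # the true 99.9% quantile, for comparison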
11,394
Why use extreme value theory?
Usually, the distribution of the underlying data (e.g., Gaussian wind speeds) is for a single sample point. The 98th percentile will tell you that for any randomly selected point there is a 2% chance of the value being bigger than the 98th percentile. I'm not a civil engineer, but I'd imagine what you'd want to know i...
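The arithmetic behind that distinction is simple (illustrative numbers):
1 - 0.98^365   # about 0.999: the yearly max of 365 iid daily values almost surely tops the daily 98th percentile
0.98^(1/365)   # about 0.99994: the daily quantile level needed so the yearly max exceeds it only 2% of the time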
11,395
Why use extreme value theory?
The use of the quantile makes the further calculation simpler. The civil engineers can substitute the value (wind speed, for instance) into their first-principle formulas and they obtain the behavior of the system for those extreme conditions that correspond to the 98.5% quantile. The use of the whole distribution coul...
11,396
How to find local peaks/valleys in a series of data?
The source of this code is obtained by typing its name at the R prompt. The output is
function (x, thresh = 0)
{
    pks <- which(diff(sign(diff(x, na.pad = FALSE)), na.pad = FALSE) < 0) + 2
    if (!missing(thresh)) {
        pks[x[pks - 1] - x[pks] > thresh]
    }
    else pks
}
The test x[pks - 1] - x[pks] > thre...
11,397
How to find local peaks/valleys in a series of data?
I agree with whuber's response, but just wanted to add that the "+2" portion of the code, which attempts to shift the index to match the newly found peak, actually 'overshoots' and should be "+1". For instance, in the example at hand we obtain:

    > findPeaks(cc)
    [1]  3 22 41 59 78 96

when we highlight these found peaks on ...
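Here is a sketch of the function with that correction applied (renamed findPeaks1 so it doesn't mask the package version; note the threshold test is shifted along with the index, which is my adjustment and not spelled out in the post above):

    findPeaks1 <- function (x, thresh = 0)
    {
        pks <- which(diff(sign(diff(x, na.pad = FALSE)), na.pad = FALSE) < 0) + 1
        if (!missing(thresh)) {
            # the drop after the peak, which now sits at x[pks], must exceed thresh
            pks[x[pks] - x[pks + 1] > thresh]
        }
        else pks
    }
    findPeaks1(c(1, 3, 2, 5, 4))   # returns 2 4, the true peak positions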
11,398
How to find local peaks/valleys in a series of data?
Eek: minor update. I had to change two lines of code, the bounds (adding a -1 and a +1), to reach equivalence with Stas_G's function (it was finding a few too many 'extra peaks' in real data sets). Apologies to anyone led even slightly astray by my original post. I have been using Stas_G's find-peaks algorithm for quite som...
11,399
How to find local peaks/valleys in a series of data?
Firstly: the algorithm also falsely calls a peak at the drop to the right of a flat plateau, because sign(diff(x, na.pad = FALSE)) is 0 and then -1 there, so that its diff is also -1. A simple fix is to ensure that the sign-diff preceding the negative entry is not zero but positive:

    n <- length(x)
    dx.1 <- sign(diff(x, na.p...
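The snippet above is cut off, so here is my own reconstruction of the described fix (not the author's exact code): a drop in the sign-diff only counts as a peak when the sign-diff just before it is strictly positive.

    find_peaks_strict <- function(x)
    {
        s <- sign(diff(x))
        d <- diff(s)
        # keep a drop in the sign-diff only when the preceding sign-diff is
        # positive, so the 0 -> -1 pattern at a plateau's right edge is skipped
        which(d < 0 & head(s, -1) > 0) + 1
    }
    find_peaks_strict(c(1, 2, 2, 1, 3, 1))   # returns 2 5: the plateau is now
                                             # reported once, at its left edge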
11,400
How to find local peaks/valleys in a series of data?
It's true the function also identifies the end of plateaux, but I think there is another, easier fix: since the sign of the first diff at a real peak is 1 followed by -1, the second diff there is -2, and we can check for that directly:

    pks <- which(diff(sign(diff(x, na.pad = FALSE)), na.pad = FALSE) < -1) + 1
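A quick check of this variant on a series containing a plateau (my example); note that, unlike the fix in the previous answer, it skips plateaux entirely, since a flat top never produces a second diff of -2:

    pks_sharp <- function(x)
        which(diff(sign(diff(x, na.pad = FALSE)), na.pad = FALSE) < -1) + 1
    pks_sharp(c(1, 2, 2, 1, 3, 1))   # returns 5: only the sharp peak is flagged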